og (when it comes to pricing), going to test it out. Thanks to
you all.
Yours
Markus Kovero
rd
I'd say a price range around the same as the X25-E was, with the main priorities being
predictable latency and performance. Write wear also shouldn't become an issue
when writing 150 MB/s 24/7, 365 days a year.
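For a rough sense of the write volume that implies (back-of-the-envelope only, assuming a constant 150 MB/s):

    # data written per day and per year at a constant 150 MB/s
    echo "150 * 60 * 60 * 24 / 1000" | bc            # ~12960 GB/day, roughly 13 TB/day
    echo "150 * 60 * 60 * 24 * 365 / 1000000" | bc   # ~4730 TB/year, roughly 4.7 PB/year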
Thanks
Yours
Markus Kovero
Hi, I was wondering whether you have any recommendations for a replacement for the
Intel X25-E, as it is being EOL'd? Mainly for use as a log device.
With kind regards
Markus Kovero
ant to writes end
up to.
If you have a degraded vdev in your pool, ZFS will try not to write there, and
that may be the case here as well; I don't see any zpool status output.
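For example, something like this would show only the unhealthy pools, with per-device error detail:

    # -x: only pools with problems, -v: verbose per-device errors
    zpool status -xv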
Yours
Markus Kovero
on.
Yours
Markus Kovero
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Donald Stahl
Sent: 9 June 2011 6:27
To: Ding Honghui
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Wired write performance
Hi, also see:
http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg45408.html
We hit this with Solaris 11, though; not sure whether it can happen with Solaris 10.
Yours
Markus Kovero
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.or
Solaris 11 Express, not OI?
Anyway, I have no idea how OpenIndiana behaves here.
Yours
Markus Kovero
1856 metaslabs in total
93373117 / 1856 = 50308 segments per metaslab on average
50308 * 1856 * 64 = 5975785472 bytes
5975785472 / 1024 / 1024 / 1024 = 5.56
= 5.56 GB
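The same arithmetic as a quick shell check (the 64 bytes per segment is the assumed in-core size used above):

    # segments per metaslab, then the total in-core space-map footprint
    echo "93373117 / 1856" | bc                           # ~50308 segments per metaslab
    echo "50308 * 1856 * 64" | bc                         # 5975785472 bytes
    echo "scale=2; 5975785472 / 1024 / 1024 / 1024" | bc  # ~5.56 GB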
Yours
Markus Kovero
tings and have useless power saving features that
could induce errors and mysterious slowness.
Yours
Markus Kovero
> On the other hand, that will only matter for reads. And the complaint is
> writes.
Actually, it also affects writes (due to checksum reads?).
Yours
Markus Kovero
in Solaris 11 Express while it might work fine in
osol.
Yours
Markus Kovero
are what
happens at all).
My solution would be not to use the R710 for anything more serious; it is
definitely a platform with more problems than I'm interested in debugging.
(:
Yours
Markus Kovero
ery happily now.
Yours
Markus Kovero
as well?
Also, how is it determined on which devices metadata gets mirrored?
Yours
Markus Kovero
are not
in "split" mode (which does not allow daisy chaining).
Yours
Markus Kovero
Hi, I'm referring to:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6977913
It should be in Solaris 11 Express; has anyone tried this? How is it supposed
to work? Is any documentation available?
Yours
Markus Kovero
have the
money (and certified system).
Yours
Markus Kovero
> Thanks for your help.
> I would check this out.
Hi, yes. No new support plans have been available for a while.
Yours
Markus Kovero
> I'm wondering if #6975124 could be the cause of my problem, too.
There are several zfs send (and receive) related issues in 111b. You might
seriously want to consider upgrading to a more recent OpenSolaris build (134) or
to OpenIndiana.
Yours
Marku
about Solaris though
Yours
Markus Kovero
> Add about 50% to the last price list from Sun und you will get the price
> it costs now ...
It seems Oracle does not really want to sell its hardware: several-month delays
getting prices from the sales rep, and pricing nowhere close to its competitors'.
Yours
Markus
calculated risk, and I doubt you're going to take my advice. ;-)
Are there any other feasible alternatives to Dell hardware? I'm wondering whether
these issues are mostly related to Nehalem architectural problems, e.g. C-states.
So is there anything to be gained by switching hardware vendor? HP, anyone?
Yours
Markus Kovero
m will stop completely.
Hi, the Broadcom issues show up as loss of network connectivity, i.e. the system stops
responding to ping.
This is a different issue; it's as if the system runs out of memory or loses its
system disks (which we have seen lately).
Yours
Markus Kovero
t
(not fully supported) hardware revision causing issues?
Yours
Markus Kovero
ng similar.
Personally, I cannot recommend using them with Solaris; the support is not even
close to what it should be.
Yours
Markus Kovero
ar setup:
a 10 TB dataset that can handle 100 MB/s writes easily; the system has 24 GB of RAM.
Yours
Markus Kovero
on today. It took around 12 hours, issuing writes
at around 1.2-1.5 GB/s on a system that had 48 GB of RAM.
Anyway, setting zfs_arc_max in /etc/system seemed to do the trick; it seems to
behave as expected even under heavier load. Performance is actually pretty
go
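For reference, this is roughly what I mean; the 16 GB cap here is only an example value, pick whatever fits your box:

    # cap the ARC at 16 GB (0x400000000 bytes); takes effect after a reboot
    echo "set zfs:zfs_arc_max=0x400000000" >> /etc/system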
er than 134 in a low-disk-space situation with dedup
turned on, after the server crashed during a (terabytes-large) snapshot destroy.
The import took some time, but it did not block I/O, and the most time-consuming part was
mounting the datasets; already-mounted datasets could be used during the import too.
Also, performance is a lo
at IF something happens to outerpool, innerpool
is no longer aware of possibly broken data, which can lead to issues.
Yours
Markus Kovero
isk
> of deadlocks? )
I haven't noticed any deadlock issues so far under low-memory conditions when
using nested pools (in a replicated configuration), at least on snv_134. Maybe I
haven't tried hard enough; anyway, wouldn't a log device in innerpool help in
this situation?
Yours
Ma
s it's underlying zvol's
>pool.
That's what I was after. Would using a log device in the inner pool make things
different then, if the presumed workload is e.g. serving NFS?
Yours
Markus Kovero
thoughts. If the issues are performance related, they can be dealt
with to some extent; I'm more worried about whether there are still deadlock issues
or other general stability issues to consider. I haven't found anything useful
in the bug tracker yet, though.
Yours
Markus Kovero
testpool) should just allow any writes/reads to/from the volume,
not caring what they are, whereas anotherpool would just work like any other
pool consisting of any other devices.
This is quite a similar setup to an iSCSI-replicated mirror pool, where you have
a redundant pool created from iSCSI volu
's and use volumes from it as
log devices? Is it even supported?
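As a rough sketch of the kind of setup being discussed (pool and volume names are made up):

    # create zvols on the outer pool and build an inner mirrored pool on top of them
    zfs create -V 100g outerpool/vol1
    zfs create -V 100g outerpool/vol2
    zpool create innerpool mirror /dev/zvol/dsk/outerpool/vol1 /dev/zvol/dsk/outerpool/vol2
    # optionally, a zvol-backed log device for the inner pool
    zfs create -V 8g outerpool/logvol
    zpool add innerpool log /dev/zvol/dsk/outerpool/logvol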
Yours
Markus Kovero
for needed time to catch up.
Yours
Markus Kovero
, but I'd try to pin the Greens down to SATA1 mode (use a
jumper, or force it via the controller). It might help a bit with these disks,
although they are not really suitable for use in any RAID configuration
due to the TLER issue, which cannot be disabled in later firmware.
Markus Kovero
ase/view_bug.do?bug_id=6923585
Yours
Markus Kovero
, and RAM should be all ok I guess?
Yours
Markus Kovero
s0'
devid: 'id1,s...@n50014ee101e8fc90/a'
phys_path:
'/p...@0,0/pci8086,3...@7/pci8086,3...@0/pci1028,1...@8/s...@21,0:a'
whole_disk: 1
DTL: 449
create_txg: 64771
Other is failed and other
ably your new disks do this too. I really don't know what's wrong with flaky
SATA2 here, but I'd be fairly sure it would fix your issues.
The performance drop is not even noticeable, so it's worth a try.
Yours
Markus Kovero
al issues and needs to be replaced.
Yours
Markus Kovero
nd/or MSI.
If your system has been running for a year or so, I wouldn't expect this issue to
come up; we have noted it mostly on R410/R710 systems manufactured
in Q4/2009-Q1/2010 (different hardware revisions?).
Yours
Markus Kovero
t comes to workarounds, disabling MSI is bad if it creates latency for the
network/disk controllers, and disabling C-states on Nehalem processors is just
stupid (you lose turbo, power saving, etc.).
Definitely a no-go for storage, in my opinion.
Yours
Markus Kovero
packet loss etc.
And as OpenSolaris is not a "supported" OS, Dell is not interested in fixing these
issues.
Yours
Markus Kovero
configuration where vdevs were added after the first ones got too full.
Anyway, this is an issue, as your writes will definitely get slower once the first
raid sets fill up. Mine did: writes went from 1.2 GB/s to 40-50 KB/s, and
freeing up some space ma
-Original Message-
From: Bruno Sousa [mailto:bso...@epinfante.com]
Sent: 5 March 2010 13:04
To: Markus Kovero
Cc: ZFS filesystem discussion list
Subject: Re: [zfs-discuss] snv_133 mpt0 freezing machine
> Hi Markus,
> Thanks for your input and regarding the broadcom fw i a
ba.
These controllers seem to work well enough with the R710 (just be sure to
downgrade the BIOS and NIC firmware to 1.1.4 and 4.x; more recent firmware causes network
issues).
Yours
Markus Kovero
t; pool?
>
> --
> Terry
> --
You can still import it, although you might lose some in-flight data that was
being written during the crash, and it can take a while during the import to finish
transactions; anyway, it will be fine.
Yours
Markus Kovero
ot able to see that level of performance at all.
>
> --
> Brent Jones
> br...@servuhome.net
Hi, I find COMSTAR performance very low when using zvols under dsk; somehow using
them under rdsk and letting COMSTAR handle the cache makes performance really
good (disks/NICs becom
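In practice that would just mean backing the LU with the raw zvol device instead of the block device, e.g. something like this (pool/volume name is made up):

    # logical unit backed by the character device (rdsk) rather than the buffered block device (dsk)
    sbdadm create-lu /dev/zvol/rdsk/tank/iscsivol01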
these 3, 4, or more day destroys has < 8 GiB of RAM on
> the
> storage server.
I've witnessed destroys that take several days on systems with 24 GB+ of RAM (datasets over
30 TB). I guess it's just a matter of how large the datasets are versus how much RAM you have.
Yours
Markus Kovero
Hi, it seems you might have some kind of hardware issue there; I have no way
of reproducing this.
Yours
Markus Kovero
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of bank kus
Sent: 10 January 2010 7:21
To: zfs
Hi, while this is not a complete solution, I'd suggest turning atime off so that
find/rm does not update access times, and possibly destroying unnecessary
snapshots before removing the files; that should be quicker.
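For example (the dataset and snapshot names are just placeholders):

    # stop find/rm from updating access times on this dataset
    zfs set atime=off tank/data
    # list snapshots that would otherwise keep the freed blocks referenced, then drop the unneeded ones
    zfs list -t snapshot -r tank/data
    zfs destroy tank/data@oldsnap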
Yours
Markus Kovero
-Original Message-
From: zfs-discuss-boun...@opensolari
m pool's member disks with dd
before the import and checking the iostat error counters for hardware/transport errors?
Did you try with a different set of RAM, or on another server? Faulty RAM could do this
as well.
And is your swap device okay, if it happens to swap during the import into
If the pool isn't rpool, you might want to boot into single-user mode (-s after
the kernel parameters at boot), remove /etc/zfs/zpool.cache and then reboot.
After that you can simply ssh into the box and watch iostat while it imports.
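Roughly, assuming the pool is called tank:

    # booted with -s appended to the kernel line
    rm /etc/zfs/zpool.cache
    reboot
    # after the reboot, from one ssh session:
    zpool import tank
    # and from another, watch the disks while it runs:
    iostat -xn 5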
Yours
Markus Kovero
servers with ICMP ping, and
high load causes the checks to fail, triggering unnecessary alarms.
Yours
Markus Kovero
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Saso Kiselkov
Sent: 28 December 2009 15:25
To
Hi, I threw 24 GB of RAM and a couple of the latest Nehalems at it, and dedup=on seemed to
cripple performance without actually using much CPU or RAM. It's quite unusable
like this.
65536, content: kernel
Dec 15 16:55:07 foo genunix: [ID 10 kern.notice]
Dec 15 16:55:07 foo genunix: [ID 665016 kern.notice] ^M 64% done: 1881224 pages
dumped,
Dec 15 16:55:07 foo genunix: [ID 495082 kern.notice] dump failed: error 28
Is it just me, or is it everlasting Monday again?
Yours
Markus
How can you set these values up in FMA?
Yours
Markus Kovero
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of R.G. Keen
Sent: 14 December 2009 20:14
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] hard
bug that occurred in
>>111-release.
>Any automatically created snapshots, perhaps?
>Casper
Nope, no snapshots.
Yours
Markus Kovero
Hi, if someone running 129 could try this out: turn off compression on your
pool, mkfile 10g /pool/file123, check the used space, then remove the file and see
whether the used space becomes available again. I'm having trouble with this; it reminds
me of a similar bug that occurred in the 111 release.
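The test spelled out, with a placeholder pool name:

    zfs set compression=off tank
    mkfile 10g /tank/file123
    zfs list tank        # note the USED value
    rm /tank/file123
    sync
    zfs list tank        # USED should drop back to roughly where it was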
Yours
M
> From what I've noticed, if one destroys a dataset that is, say, 50-70 TB and
> reboots before the destroy is finished, it can take up to several _days_ before
> it's back up again.
So nowadays I'm doing rm -fr BEFORE issuing zfs destroy whenever possible.
Yours
Markus Kovero
up.
So how long have you waited? Have you tried removing /etc/zfs/zpool.cache and
then booting into snv_128, doing the import, and watching the disks with iostat
to see whether there is any activity?
Yours
Markus Kovero
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailt
We actually tried this, although using the Solaris 10 version of the mpt driver.
Surprisingly, it didn't work :-)
Yours
Markus Kovero
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Mark Johnson
Sent: 1 December 2009 15:
Have you tried another SAS-cable?
Yours
Markus Kovero
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of M P
Sent: 11 November 2009 21:05
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS on JBOD
issues).
Yours
Markus Kovero
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of M P
Sent: 11 November 2009 18:08
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] ZFS on JBOD storage, mpt driver issue - server not
orced into SATA1 mode. I believe this is a known issue with newer 2 TB disks
and some disk controllers, and it may be caused by bad cabling or
connectivity.
We have also never witnessed this behaviour with SAS disks (Fujitsu, IBM, ...). All this
happens with snv_118, 122, 123 and 125.
Yours
Markus K
How do you estimate the needed queue depth if one has, say, 64 to 128 disks sitting
behind an LSI controller?
Is it a bad idea to have a queue depth of 1?
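For what it's worth, on these builds the per-vdev queue depth can be tuned via the zfs_vdev_max_pending setting in /etc/system (the value below is only an example):

    # limit outstanding I/Os per vdev; takes effect after a reboot
    echo "set zfs:zfs_vdev_max_pending=10" >> /etc/system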
Yours
Markus Kovero
From: zfs-discuss-boun...@opensolaris.org
[zfs-discuss-boun...@opensolaris.org] On Behalf Of Richard
              ONLINE       0     0     0
    c8t149d0  ONLINE       0     0     0
    c8t91d0   ONLINE       0     0     0
    c8t94d0   ONLINE       0     0     0
    c8t95d0   ONLINE       0     0     0
Yours
Markus
that I found a workaround: running snoop with
promiscuous mode disabled on the interfaces suffering the lag made the
interruptions go away. Is this some kind of CPU/IRQ scheduling issue?
The behaviour was noticed on two different platforms and with two different NICs
(bge and e1000).
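For reference, the workaround looks like this (the interface name is just an example):

    # capture in non-promiscuous mode (-P) on the affected interface and discard the output
    snoop -P -d bge0 > /dev/null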
Yours
Markus K
Is it possible to migrate data from iscsitgt to a COMSTAR iSCSI target? I guess
COMSTAR wants its metadata at the beginning of the volume, and that makes things difficult?
Yours
Markus Kovero
It's possible to do 3-way (or more) mirrors too, so you may achieve better
redundancy than raidz2/raidz3.
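For example (device names are placeholders):

    # a three-way mirror keeps three copies; the vdev survives losing any two of its three disks
    zpool create tank mirror c0t0d0 c0t1d0 c0t2d0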
Yours
Markus Kovero
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Marty Scholes
Sent: 16 September 2009 19:
Hi, I managed to test this out; it seems iscsitgt performance is suboptimal
with this setup, but somehow COMSTAR maxes out GigE easily, so no performance
issues there.
Yours
Markus Kovero
-Original Message-
From: Maurice Volaski [mailto:maurice.vola...@einstein.yu.edu]
Sent: 11 September
I believe failover is best done manually, just to be sure the active node is
really dead before importing the pool on the other node; otherwise there could be
serious issues, I think.
Yours
Markus Kovero
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun
This also makes failover easier, as the volumes are already shared via iSCSI on
both nodes.
I have to poke at it next week to get performance numbers; I'd imagine it
stays within expected iSCSI performance, or at least it should.
Yours
Markus Kovero
-Original Message-
From: Richard
here?
Yours
Markus Kovero
A couple of months, nope. I guess there is a DOS utility provided by WD that
allows you to change the TLER settings.
Having TLER disabled can be a problem: faulty disks time out randomly, and ZFS
doesn't always want to mark them as failed (sometimes it does, though).
Yours
Markus Kovero
-Original Me
We've been using the 1 TB Caviar Black in disk configurations consisting of 64 disks
or more. They are working just fine.
Yours
Markus Kovero
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Eugen Leitl
Sent: 11 September
Hi, I noticed that the counters do not get updated if the amount of data increases
during a scrub/resilver, so if an application has written new data during the scrub,
the counter will not give a realistic estimate.
This happens with both resilvering and scrubbing; could somebody fix this?
Yours
Markus Kovero
-Original
Please check iostat -xen to see whether there are transport or hardware errors generated by,
say, device timeouts or bad cables. Consumer disks usually just time out from
time to time under load, whereas the RE versions usually report an error.
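E.g.:

    # -x: extended stats, -e: per-device error counters (s/w, h/w, trn, tot), -n: descriptive device names; 5-second intervals
    iostat -xen 5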
Yours
Markus Kovero
-Original Message-
From: zfs-discuss-boun
on another disk set.
Yours
Markus Kovero
BTW, a new Intel X25-M (G2) is coming next month that will offer better
random reads/writes than the E series and a seriously cheap price tag; worth a try,
I'd say.
Yours
Markus Kovero
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-di
goes through.
Somebody said that zpool import got faster in snv_118, but I don't have solid
information on that yet.
Yours
Markus Kovero
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Victor Latushkin
Sent: 29.
Oh well, the whole system seems to be deadlocked.
Nice. A little too keen on keeping data safe :-P
Yours
Markus Kovero
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Markus Kovero
Sent: 27 July 2009 13:39
To: zfs-discuss@opensolaris.org
Subject
Hi, how come zfs destroy is so slow? E.g. destroying a 6 TB dataset renders the zfs
admin commands unusable for the time being, in this case for hours.
(Running osol 111b with the latest patches.)
Yours
Markus Kovero
Hi, thanks for pointing out the issue; we haven't run updates on the server yet.
Yours
Markus Kovero
-Original Message-
From: Henrik Johansson [mailto:henr...@henkis.net]
Sent: 24 July 2009 12:26
To: Markus Kovero
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] No file
Yes, the server has been rebooted several times and there is no available space. Is
it possible to somehow delete the ghosts that zdb sees? How can this happen?
Yours
Markus Kovero
-Original Message-
From: pantz...@gmail.com [mailto:pantz...@gmail.com] On Behalf Of Mattias
Pantzare
Sent: 24
:56
To: Markus Kovero
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] No files but pool is full?
On Fri, Jul 24, 2009 at 09:33, Markus Kovero wrote:
> During our tests we noticed very disturbing behavior, what would be causing
> this?
>
> System is running latest stable
0x
Yours
Markus Kovero
I would be interested in how to roll back to certain txg points in case of
disaster; that was what Russel was after anyway.
Yours
Markus Kovero
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Miles Nordin
Sent: 19