Re: [zfs-discuss] Replacement for X25-E

2011-09-26 Thread Markus Kovero
og (when it comes to pricing), going to test it out. Thanks to you all. Yours Markus Kovero

Re: [zfs-discuss] Replacement for X25-E

2011-09-21 Thread Markus Kovero
rd I'd say a price range around the same as the X25-E was, the main priorities being predictable latency and performance. Also, write wear shouldn't become an issue when writing 150MB/s 24/7, 365 days a year. Thanks Yours Markus Kovero
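For scale, a back-of-the-envelope endurance figure (my own arithmetic, not from the thread): 150 MB/s x 86,400 s/day is roughly 13 TB written per day, or about 4.7 PB per year, so the replacement's rated write endurance needs to sit in that ballpark.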

[zfs-discuss] Replacement for X25-E

2011-09-20 Thread Markus Kovero
Hi, I was wondering, do you guys have any recommendations for a replacement for the Intel X25-E as it is being EOL'd? Mainly for use as a log device. With kind regards Markus Kovero
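For context, a log device is attached to an existing pool with a one-liner; a minimal sketch with hypothetical pool/device names:

  zpool add tank log c4t2d0                 # single SSD as the separate ZIL device
  zpool add tank log mirror c4t2d0 c4t3d0   # or mirrored, if the log must survive a device loss
  zpool status tank                         # the device shows up under "logs"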

Re: [zfs-discuss] about write balancing

2011-06-30 Thread Markus Kovero
ant to writes end up to. If you have a degraded vdev in your pool, ZFS will try not to write there, and this may be the case here as well; I don't see the zpool status output. Yours Markus Kovero

Re: [zfs-discuss] Wired write performance problem

2011-06-09 Thread Markus Kovero
on. Yours Markus Kovero -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Donald Stahl Sent: 9 June 2011 6:27 To: Ding Honghui Cc: zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] Wired write performance

Re: [zfs-discuss] Wired write performance problem

2011-06-08 Thread Markus Kovero
Hi, also see: http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg45408.html We hit this with Sol11 though; not sure if it's possible with Sol10. Yours Markus Kovero -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.or

Re: [zfs-discuss] No write coalescing after upgrade to Solaris 11 Express

2011-04-28 Thread Markus Kovero
Solaris 11 Express, not OI? Anyway, no idea about whether OpenIndiana should work or not. Yours Markus Kovero

Re: [zfs-discuss] No write coalescing after upgrade to Solaris 11 Express

2011-04-27 Thread Markus Kovero
1856 metaslabs in total; 93373117 / 1856 = 50308 average segments per metaslab; 50308 * 1856 * 64 = 5975785472 bytes; 5975785472 / 1024 / 1024 / 1024 = 5.56 GB. Yours Markus Kovero

Re: [zfs-discuss] What drives?

2011-02-24 Thread Markus Kovero
tings and have useless power saving features that could induce errors and mysterious slowness. Yours Markus Kovero

Re: [zfs-discuss] Very bad ZFS write performance. Ok Read.

2011-02-11 Thread Markus Kovero
> On the other hand, that will only matter for reads. And the complaint is > writes. Actually, it also affects writes (due to checksum reads?). Yours Markus Kovero

Re: [zfs-discuss] Very bad ZFS write performance. Ok Read.

2011-02-11 Thread Markus Kovero
in Solaris 11 Express while it might work fine in osol. Yours Markus Kovero

Re: [zfs-discuss] Running on Dell hardware?

2010-12-12 Thread Markus Kovero
are what happens at all). My solution for these issues would be not to use the R710 in anything more serious; it is definitely a platform with more problems than I'm interested in debugging (: Yours Markus Kovero

Re: [zfs-discuss] zpool does not like iSCSI ?

2010-12-01 Thread Markus Kovero
ery happily now. Yours Markus Kovero

Re: [zfs-discuss] RAID-Z/mirror hybrid allocator

2010-11-22 Thread Markus Kovero
as well? Also, how is it determined which devices the metadata is mirrored to? Yours Markus Kovero

Re: [zfs-discuss] problem adding second MD1000 enclosure to LSI 9200-16e

2010-11-21 Thread Markus Kovero
are not in "split" mode (which does not allow daisy chaining). Yours Markus Kovero

[zfs-discuss] RAID-Z/mirror hybrid allocator

2010-11-18 Thread Markus Kovero
Hi, I'm referring to: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6977913 It should be in Solaris 11 Express; has anyone tried this? How is this supposed to work? Is any documentation available? Yours Markus Kovero

Re: [zfs-discuss] ZFS Crypto in Oracle Solaris 11 Express

2010-11-17 Thread Markus Kovero
have the money (and certified system). Yours Markus Kovero

Re: [zfs-discuss] is opensolaris support ended?

2010-11-11 Thread Markus Kovero
> Thanks for your help. > I would check this out. Hi, yes. No new support plans have been available for a while. Yours Markus Kovero

Re: [zfs-discuss] Maximum zfs send/receive throughput

2010-11-08 Thread Markus Kovero
> I'm wondering if #6975124 could be the cause of my problem, too. There are several zfs send (and receive) related issues with 111b. You might seriously want to consider upgrading to a more recent OpenSolaris (134) or OpenIndiana. Yours Marku

Re: [zfs-discuss] zpool does not like iSCSI ?

2010-11-03 Thread Markus Kovero
about Solaris though Yours Markus Kovero

Re: [zfs-discuss] Running on Dell hardware?

2010-10-26 Thread Markus Kovero
> Add about 50% to the last price list from Sun and you will get the price > it costs now ... Seems Oracle does not want to sell its hardware so much: several-month delays with the sales rep providing prices, and pricing nowhere close to its competitors'. Yours Markus

Re: [zfs-discuss] Running on Dell hardware?

2010-10-25 Thread Markus Kovero
calculated risk, and I doubt you're going to take my advice.  ;-) Any other feasible alternatives to Dell hardware? I'm wondering whether these issues are mostly related to Nehalem architectural problems, e.g. C-states. So is there anything to gain in switching hardware vendor? HP, anyone? Yours Markus Kovero

Re: [zfs-discuss] Running on Dell hardware?

2010-10-25 Thread Markus Kovero
m will stop completely. Hi, Broadcom issues show up as loss of network connectivity, i.e. the system stops responding to ping. This is a different issue; it's as if the system runs out of memory or loses its system disks (which we have seen lately). Yours Markus Kovero

Re: [zfs-discuss] Running on Dell hardware?

2010-10-13 Thread Markus Kovero
t (not fully supported) hardware revision causing issues? Yours Markus Kovero

Re: [zfs-discuss] Running on Dell hardware?

2010-10-13 Thread Markus Kovero
ng similar. Personally, I cannot recommend using them with Solaris; the support is not even close to what it should be. Yours Markus Kovero

Re: [zfs-discuss] dedup status

2010-10-01 Thread Markus Kovero
ar setup, a 10TB dataset that can handle 100MB/s writes easily; the system has 24GB of RAM. Yours Markus Kovero

Re: [zfs-discuss] Pools inside pools

2010-09-28 Thread Markus Kovero
on today. It took around 12 hours issuing writes at around 1.2-1.5GB/s on a system that had 48GB of RAM. Anyway, setting zfs_arc_max in /etc/system seemed to do the trick; it seems to behave as expected even under heavier load. Performance is actually pretty go
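A minimal sketch of that /etc/system tuning, assuming a cap of 16 GB (an example value, not the figure used here):

  * /etc/system -- cap the ZFS ARC; takes effect after a reboot
  * 0x400000000 bytes = 16 GB (example value only)
  set zfs:zfs_arc_max = 0x400000000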

Re: [zfs-discuss] dedup testing?

2010-09-25 Thread Markus Kovero
er than 134 in a low-disk-space situation with dedup turned on, after the server crashed during a (terabytes-large) snapshot destroy. The import took some time but it did not block IO, and the most time-consuming part was mounting datasets; already-mounted datasets could be used during the import too. Also performance is a lo

Re: [zfs-discuss] Pools inside pools

2010-09-23 Thread Markus Kovero
at IF something happens to outerpool, innerpool is no longer aware of possibly broken data, which can lead to issues. Yours Markus Kovero

Re: [zfs-discuss] Pools inside pools

2010-09-22 Thread Markus Kovero
isk > of deadlocks? ) I haven't noticed any deadlock issues so far in low-memory conditions when doing nested pools (in a replicated configuration), at least in snv134. Maybe I haven't tried hard enough; anyway, wouldn't a log device in innerpool help in this situation? Yours Ma

Re: [zfs-discuss] Pools inside pools

2010-09-22 Thread Markus Kovero
s its underlying zvol's >pool. That's what I was after. Would using a log device in the inner pool make things different then? The presumed workload is e.g. serving NFS. Yours Markus Kovero

Re: [zfs-discuss] Pools inside pools

2010-09-22 Thread Markus Kovero
thoughts. If the issues are performance related, they can be dealt with to some extent; I'm more worried about whether there are still deadlock issues or other general stability issues to consider. I haven't found anything useful from bugtraq yet, though. Yours Markus Kovero

Re: [zfs-discuss] Pools inside pools

2010-09-22 Thread Markus Kovero
testpool) should just allow any writes/reads to/from the volume, not caring what they are, whereas anotherpool would just work as any other pool consisting of any other devices. This is quite a similar setup to an iSCSI-replicated mirror pool, where you have a redundant pool created from iSCSI volu

[zfs-discuss] Pools inside pools

2010-09-22 Thread Markus Kovero
's and use volumes from it as log devices? Is it even supported? Yours Markus Kovero
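For illustration only, this is the kind of layout being asked about (hypothetical pool/volume names; not a claim that it is supported):

  zfs create -V 8G outerpool/slogvol                        # a zvol carved out of one pool
  zpool add innerpool log /dev/zvol/dsk/outerpool/slogvol   # used as the other pool's log device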

Re: [zfs-discuss] resilver that never finishes

2010-09-19 Thread Markus Kovero
for needed time to catch up. Yours Markus Kovero

Re: [zfs-discuss] performance leakage when copy huge data

2010-09-09 Thread Markus Kovero
, but I'd try to pin the Greens down to SATA1 mode (use a jumper, or force it via the controller). It might help a bit with these disks, although they are not really suitable for use in any RAID configuration due to the TLER issue, which cannot be disabled in later firmware

Re: [zfs-discuss] dedup status

2010-05-16 Thread Markus Kovero
Markus Kovero

Re: [zfs-discuss] Mirroring USB Drive with Laptop for Backup purposes

2010-05-07 Thread Markus Kovero
ase/view_bug.do?bug_id=6923585 Yours Markus Kovero

Re: [zfs-discuss] b134 pool borked!

2010-05-05 Thread Markus Kovero
, and RAM should be all ok I guess? Yours Markus Kovero

Re: [zfs-discuss] zpool lists 2 controllers the same, how do I replace one?

2010-04-23 Thread Markus Kovero
s0'
devid: 'id1,s...@n50014ee101e8fc90/a'
phys_path: '/p...@0,0/pci8086,3...@7/pci8086,3...@0/pci1028,1...@8/s...@21,0:a'
whole_disk: 1
DTL: 449
create_txg: 64771
Other is failed and other

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-04-10 Thread Markus Kovero
ably your new disks do this too. I really don't know what's up with the flaky SATA2, but I'd be quite sure it would fix your issues. The performance drop is not even noticeable, so it's worth a try. Yours Markus Kovero

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-04-10 Thread Markus Kovero
al issues and needs to be replaced. Yours Markus Kovero

Re: [zfs-discuss] Are there (non-Sun/Oracle) vendors selling OpenSolaris/ZFS based NAS Hardware?

2010-04-07 Thread Markus Kovero
nd/or MSI. If your system has been running for a year or so, I wouldn't expect this issue to come up; we have noted this issue mostly with R410/R710 units manufactured in Q4/2009-Q1/2010 (different hardware revisions?). Yours Markus Kovero

Re: [zfs-discuss] Are there (non-Sun/Oracle) vendors selling OpenSolaris/ZFS based NAS Hardware?

2010-04-06 Thread Markus Kovero
t comes to workarounds, disabling MSI is bad if it creates latency for network/disk controllers, and disabling C-states on Nehalem processors is just stupid (no turbo, no power saving, etc.). Definitely a no-go for storage, IMO. Yours Markus Kovero

Re: [zfs-discuss] Are there (non-Sun/Oracle) vendors selling OpenSolaris/ZFS based NAS Hardware?

2010-04-06 Thread Markus Kovero
packet loss etc. And as OpenSolaris is not a "supported" OS, Dell is not interested in fixing these issues. Yours Markus Kovero

Re: [zfs-discuss] Pool vdev imbalance - getting worse?

2010-03-25 Thread Markus Kovero
configuration where vdevs were added after the first ones got too full. Anyway, this is an issue, as your writes will definitely get slower after the first raid sets get more full; as mine did, writes went from 1.2GB/s to 40-50KB/s, and freeing up some space ma

Re: [zfs-discuss] snv_133 mpt0 freezing machine

2010-03-05 Thread Markus Kovero
-Original Message- From: Bruno Sousa [mailto:bso...@epinfante.com] Sent: 5 March 2010 13:04 To: Markus Kovero Cc: ZFS filesystem discussion list Subject: Re: [zfs-discuss] snv_133 mpt0 freezing machine > Hi Markus, > Thanks for your input and regarding the broadcom fw i a

Re: [zfs-discuss] snv_133 mpt0 freezing machine

2010-03-05 Thread Markus Kovero
ba. These controllers seem to work well enough with the R710 (just be sure to downgrade the BIOS and NIC firmware to 1.1.4 and 4.x; more recent firmware causes network issues :) Yours Markus Kovero

Re: [zfs-discuss] Disk controllers changing the names of disks

2010-02-19 Thread Markus Kovero
t; pool? > > -- > Terry > -- You can still import it, although you might lose some in-flight data that was being written during the crash, and the import can take a while to finish transactions; anyway, it will be fine. Yours Markus Kovero

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Markus Kovero
ot able to see that level of performance at all. > > -- > Brent Jones > br...@servuhome.net Hi, I find COMSTAR performance very low when using zvols under dsk; somehow using them under rdsk and letting COMSTAR handle the cache makes performance really good (disks/nics becom
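A minimal sketch of what "under rdsk" means here, with a hypothetical zvol name:

  zfs create -V 100G tank/lun0
  sbdadm create-lu /dev/zvol/rdsk/tank/lun0   # note rdsk (raw device), not dsk
  stmfadm list-lu -v                          # then add views/targets as usual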

Re: [zfs-discuss] Reading ZFS config for an extended period

2010-02-15 Thread Markus Kovero
these 3, 4, or more day destroys has < 8 GiB of RAM on > the > storage server. I've witnessed destroys that take several days on 24GB+ systems (dataset over 30TB). I guess it's just a matter of how large the datasets are vs. how much RAM there is. Yours Markus Kovero

Re: [zfs-discuss] I/O Read starvation

2010-01-10 Thread Markus Kovero
Hi, it seems you might have some kind of hardware issue there; I have no way of reproducing this. Yours Markus Kovero -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of bank kus Sent: 10 January 2010 7:21 To: zfs

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Markus Kovero
Hi, while not providing a complete solution, I'd suggest turning atime off so find/rm does not change access times, and possibly destroying unnecessary snapshots before removing the files; it should be quicker. Yours Markus Kovero -Original Message- From: zfs-discuss-boun...@opensolari
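A minimal sketch of that suggestion, with a hypothetical dataset name:

  zfs set atime=off tank/bigdir         # stop access-time updates during the delete
  zfs list -t snapshot -r tank/bigdir   # then destroy any snapshots you no longer need,
                                        # so the removed files actually free space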

Re: [zfs-discuss] ZFS pool unusable after attempting to destroy a dataset with dedup enabled

2010-01-02 Thread Markus Kovero
m pool's member disks with dd before import and checking the iostat error counters for hw/transport errors? Did you try with a different set of RAM on another server? Faulty RAM could do this as well. And is your swap device okay, if it happens to swap during the import into

Re: [zfs-discuss] ZFS pool unusable after attempting to destroy a dataset with dedup enabled

2010-01-02 Thread Markus Kovero
If the pool isn't rpool, you might want to boot into single-user mode (-s after the kernel parameters on boot), remove /etc/zfs/zpool.cache and then reboot. After that you can just ssh into the box and watch iostat while the import runs. Yours Markus Kovero
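Roughly, the steps being suggested (pool name is hypothetical):

  # boot with -s appended to the kernel line, then:
  rm /etc/zfs/zpool.cache    # pool no longer auto-imports at boot
  reboot
  # after the reboot, over ssh:
  zpool import tank          # kick off the import
  iostat -xn 5               # in another session, watch for disk activity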

Re: [zfs-discuss] ZFS write bursts cause short app stalls

2009-12-28 Thread Markus Kovero
servers with ICMP ping, and high load causes the checks to fail, therefore triggering unnecessary alarms. Yours Markus Kovero -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Saso Kiselkov Sent: 28 December 2009 15:25 To

Re: [zfs-discuss] Troubleshooting dedup performance

2009-12-23 Thread Markus Kovero
Hi, I threw 24GB of RAM and a couple of the latest Nehalems at it, and dedup=on seemed to cripple performance without actually using much CPU or RAM. It's quite unusable like this.

[zfs-discuss] snv_129 dedup panic

2009-12-15 Thread Markus Kovero
65536, content: kernel
Dec 15 16:55:07 foo genunix: [ID 10 kern.notice]
Dec 15 16:55:07 foo genunix: [ID 665016 kern.notice] ^M 64% done: 1881224 pages dumped,
Dec 15 16:55:07 foo genunix: [ID 495082 kern.notice] dump failed: error 28
Is it just me, or is it everlasting Monday again? Yours Markus

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2009-12-14 Thread Markus Kovero
How can you set these values up in FMA? Yours Markus Kovero -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of R.G. Keen Sent: 14 December 2009 20:14 To: zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] hard

Re: [zfs-discuss] Space not freed?

2009-12-14 Thread Markus Kovero
bug that occurred in >>111-release. >Any automatically created snapshots, perhaps? >Casper Nope, no snapshots. Yours Markus Kovero

[zfs-discuss] Space not freed?

2009-12-14 Thread Markus Kovero
Hi, if someone running 129 could try this out: turn off compression in your pool, mkfile 10g /pool/file123, check the used space, then remove the file and see if it makes the used space available again. I'm having trouble with this; it reminds me of a similar bug that occurred in the 111 release. Yours M
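The test spelled out as commands (the 30-second wait is my own addition, just to let a couple of txgs commit):

  zfs set compression=off pool
  mkfile 10g /pool/file123
  zfs list pool              # note USED
  rm /pool/file123
  sync; sleep 30
  zfs list pool              # USED should drop back down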

Re: [zfs-discuss] ZFS pool unusable after attempting to destroy a dataset with dedup enabled

2009-12-08 Thread Markus Kovero
From what I've noticed, if one destroys a dataset that is, say, 50-70TB and reboots before the destroy is finished, it can take up to several _days_ before it's back up again. So nowadays I'm doing rm -fr BEFORE issuing zfs destroy whenever possible. Yours Markus Kovero
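As a sketch of that order of operations, with a hypothetical dataset name:

  rm -rf /tank/bigdata/*     # free the bulk of the data incrementally first
  zfs destroy tank/bigdata   # the destroy itself then has far less left to unlink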

Re: [zfs-discuss] ZFS pool unusable after attempting to destroy a dataset with dedup enabled

2009-12-08 Thread Markus Kovero
up. So how long have you waited? Have you tried removing /etc/zfs/zpool.cache and then booting into snv_128, doing the import and possibly watching the disks with iostat to see whether there is any activity? Yours Markus Kovero -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailt

Re: [zfs-discuss] mpt errors on snv 127

2009-12-01 Thread Markus Kovero
We actually tried this, although using the Sol10 version of the mpt driver. Surprisingly it didn't work :-) Yours Markus Kovero -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Mark Johnson Sent: 1 December 2009 15:

Re: [zfs-discuss] ZFS on JBOD storage, mpt driver issue - server not responding

2009-11-11 Thread Markus Kovero
Have you tried another SAS cable? Yours Markus Kovero -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of M P Sent: 11 November 2009 21:05 To: zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] ZFS on JBOD

Re: [zfs-discuss] ZFS on JBOD storage, mpt driver issue - server not responding

2009-11-11 Thread Markus Kovero
issues). Yours Markus Kovero -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of M P Sent: 11 November 2009 18:08 To: zfs-discuss@opensolaris.org Subject: [zfs-discuss] ZFS on JBOD storage, mpt driver issue - server not

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-24 Thread Markus Kovero
orced into SATA1 mode. I believe this is a known issue with newer 2TB disks and some other disk controllers, and it may be caused by bad cabling or connectivity. We have never witnessed this behaviour with SAS disks (Fujitsu, IBM, ...), either. All this happens with snv 118, 122, 123 and 125. Yours Markus K

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-24 Thread Markus Kovero
How do you estimate the needed queue depth if one has, say, 64 to 128 disks sitting behind an LSI? Is it a bad idea to have a queue depth of 1? Yours Markus Kovero From: zfs-discuss-boun...@opensolaris.org [zfs-discuss-boun...@opensolaris.org] on behalf of Richard

[zfs-discuss] Numbered vdevs

2009-10-19 Thread Markus Kovero
           ONLINE  0  0  0
  c8t149d0 ONLINE  0  0  0
  c8t91d0  ONLINE  0  0  0
  c8t94d0  ONLINE  0  0  0
  c8t95d0  ONLINE  0  0  0
Yours Markus

[zfs-discuss] Unusual latency issues

2009-09-28 Thread Markus Kovero
that I found a workaround: running snoop with promiscuous mode disabled on the interfaces suffering lag made the interruptions go away. Is this some kind of CPU/IRQ scheduling issue? The behaviour was noticed on two different platforms and with two different NICs (bge and e1000). Yours Markus K

[zfs-discuss] Migrate from iscsitgt to comstar?

2009-09-21 Thread Markus Kovero
Is it possible to migrate data from iscsitgt to a COMSTAR iSCSI target? I guess COMSTAR wants metadata at the beginning of the volume, and this makes things difficult? Yours Markus Kovero

Re: [zfs-discuss] RAIDZ versus mirrroed

2009-09-16 Thread Markus Kovero
It's possible to do 3-way (or more) mirrors too, so you may achieve better redundancy than raidz2/3. Yours Markus Kovero -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Marty Scholes Sent: 16 September 2009 19:
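For illustration, a three-way mirror (hypothetical device names):

  zpool create tank mirror c0t0d0 c0t1d0 c0t2d0   # three-way mirror vdev from the start
  zpool attach tank c0t0d0 c0t3d0                 # or widen an existing mirror by one more side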

Re: [zfs-discuss] sync replication easy way?

2009-09-16 Thread Markus Kovero
Hi, I managed to test this out; it seems iscsitgt performance is suboptimal with this setup, but somehow COMSTAR maxes out GigE easily, no performance issues there. Yours Markus Kovero -Original Message- From: Maurice Volaski [mailto:maurice.vola...@einstein.yu.edu] Sent: 11 September

Re: [zfs-discuss] sync replication easy way?

2009-09-11 Thread Markus Kovero
I believe failover is best done manually, just to be sure the active node is really dead before importing the pool on another node; otherwise there could be serious issues, I think. Yours Markus Kovero -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun

Re: [zfs-discuss] sync replication easy way?

2009-09-11 Thread Markus Kovero
This also makes failover easier, as the volumes are already shared via iSCSI on both nodes. I have to poke at it next week to see performance numbers; I could imagine it stays within expected iSCSI performance, or it should at least. Yours Markus Kovero -Original Message- From: Richard

[zfs-discuss] sync replication easy way?

2009-09-11 Thread Markus Kovero
here? Yours Markus Kovero

Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Markus Kovero
Couple of months, nope. I guess there is this DOS utility provided by WD that allows you to change TLER settings. Having TLER disabled can be a problem: faulty disks time out randomly and ZFS doesn't always want to mark them as failed, though sometimes it does. Yours Markus Kovero -Original Me

Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Markus Kovero
We've been using Caviar Black 1TB drives in disk configurations consisting of 64 disks or more. They are working just fine. Yours Markus Kovero -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Eugen Leitl Sent: 11 September

Re: [zfs-discuss] This is the scrub that never ends...

2009-09-07 Thread Markus Kovero
Hi, I noticed that the counters do not get updated if the amount of data increases during a scrub/resilver, so if an application has written new data during the scrub, the counter will not give a realistic estimate. This happens with both resilvering and scrubbing; could somebody fix this? Yours Markus Kovero -Original

Re: [zfs-discuss] snv_110 -> snv_121 produces checksum errors on Raid-Z pool

2009-09-02 Thread Markus Kovero
Please check iostat -xen to see whether there are transport or hardware errors generated by, say, device timeouts or bad cables. Consumer disks usually just time out from time to time under load, whereas RE versions usually report an error. Yours Markus Kovero -Original Message- From: zfs-discuss-boun
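For reference, the per-device error counters show up like this:

  iostat -xen 5    # the s/w, h/w and trn columns are soft, hard and transport error counts
  iostat -En       # per-device error summary, including media and "no device" errors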

[zfs-discuss] possible resilver bugs

2009-08-21 Thread Markus Kovero
on another disk set. Yours Markus Kovero

Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-30 Thread Markus Kovero
By the way, there's a new Intel X25-M (G2) coming next month that will offer better random reads/writes than the E-series at a seriously cheap price tag; worth a try, I'd say. Yours Markus Kovero -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-di

Re: [zfs-discuss] zpool import hungs up forever...

2009-07-29 Thread Markus Kovero
goes through. Somebody said that zpool import got faster in snv118, but I don't have real information on that yet. Yours Markus Kovero -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Victor Latushkin Sent: 29.

Re: [zfs-discuss] zfs destroy slow?

2009-07-27 Thread Markus Kovero
Oh well, the whole system seems to be deadlocked. Nice. A little too keen on keeping data safe :-P Yours Markus Kovero From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Markus Kovero Sent: 27 July 2009 13:39 To: zfs-discuss@opensolaris.org Subject

[zfs-discuss] zfs destroy slow?

2009-07-27 Thread Markus Kovero
Hi, how come zfs destroy is so slow? E.g. destroying a 6TB dataset renders the zfs admin commands useless for the time being, in this case for hours (running OSOL 111b with the latest patches). Yours Markus Kovero

Re: [zfs-discuss] No files but pool is full?

2009-07-24 Thread Markus Kovero
Hi, thanks for pointing out the issue; we haven't run updates on the server yet. Yours Markus Kovero -Original Message- From: Henrik Johansson [mailto:henr...@henkis.net] Sent: 24 July 2009 12:26 To: Markus Kovero Cc: zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] No file

Re: [zfs-discuss] No files but pool is full?

2009-07-24 Thread Markus Kovero
Yes, the server has been rebooted several times and there is no available space. Is it possible to somehow delete the ghosts that zdb sees? How can this happen? Yours Markus Kovero -Original Message- From: pantz...@gmail.com [mailto:pantz...@gmail.com] On Behalf Of Mattias Pantzare Sent: 24

Re: [zfs-discuss] No files but pool is full?

2009-07-24 Thread Markus Kovero
:56 To: Markus Kovero Cc: zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] No files but pool is full? On Fri, Jul 24, 2009 at 09:33, Markus Kovero wrote: > During our tests we noticed very disturbing behavior; what would be causing > this? > > System is running latest stable

[zfs-discuss] No files but pool is full?

2009-07-24 Thread Markus Kovero
0x Yours Markus Kovero

Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-19 Thread Markus Kovero
I would be interested in how to roll back to certain txg points in case of disaster; that was what Russel was after, anyway. Yours Markus Kovero -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Miles Nordin Sent: 19