[zfs-discuss] Cannot replace a replacing device

2010-03-28 Thread Jim
I had a drive fail and replaced it with a new drive. During the resilvering process the new drive had write faults and was taken offline. These faults were caused by a broken SATA cable (the drive checked out fine with the manufacturer's software). A new cable fixed the failure. However, now the driv

Re: [zfs-discuss] Cannot replace a replacing device

2010-03-28 Thread Jim
Yes - but it does nothing. The drive remains FAULTED.

Re: [zfs-discuss] Cannot replace a replacing device

2010-03-29 Thread Jim
Thanks for the suggestion, but I have tried detaching and it refuses, reporting "no valid replicas". Capture below.
C3P0# zpool status
  pool: tank
 state: DEGRADED
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        DEGRADED     0     0     0
          raidz1    DEGRADED     0     0     0
            ad4     ONLINE       0     0     0
            ad6     ONLINE       0     0     0
            repla
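
For reference, a sketch of the usual escape routes from a stuck "replacing" vdev; the device name ad8 below is a stand-in for whichever disk shows FAULTED in the capture:
# zpool status -v tank      # note the exact name (or GUID) of the FAULTED new disk
# zpool clear tank          # clear the write faults recorded while the cable was bad
# zpool online tank ad8     # ask ZFS to resume resilvering onto the repaired disk
# zpool detach tank ad8     # or drop the faulted half of the replacing vdev entirely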

Re: [zfs-discuss] Cannot replace a replacing device

2010-03-30 Thread Jim
Thanks - have run it and it returns pretty quickly. Given the output (attached), what action can I take? Thanks, James
Dirty time logs:
        tank
        outage [300718,301073] length 356
        outage [301138,301139] length 2
        outage [301149,30

[zfs-discuss] Why RAID 5 stops working in 2009

2008-07-03 Thread Jim
Has anyone here read the article "Why RAID 5 stops working in 2009" at http://blogs.zdnet.com/storage/?p=162 ? Does RAIDZ have the same chance of an unrecoverable read error as RAID5 in Linux if the RAID has to be rebuilt because of a faulty disk? I imagine so because of the physical constraints that p

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-27 Thread Jim Horng
It's unclear what you want to do. What's the goal for this exercise? If you want to replace the pool with larger disks and the pool is a mirror or raidz, you just replace one disk at a time and allow the pool to rebuild itself. Once all the disks have been replaced, it will automatically realize the disk
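
A sketch of that disk-at-a-time replacement with invented device names; the autoexpand property (on builds that have it) lets the extra capacity appear once the last disk is swapped:
# zpool set autoexpand=on tank        # if unavailable, export/import the pool after the last swap
# zpool replace tank c1t2d0 c2t0d0    # swap one disk and wait for the resilver to finish
# zpool status tank                   # confirm "resilver completed" before starting the next disk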

Re: [zfs-discuss] Solaris 10 default caching segmap/vpm size

2010-04-27 Thread Jim Mauro
eed memory, ZFS will release memory being used by the ARC. But, if no one else wants it /jim On Apr 27, 2010, at 9:07 PM, Brad wrote: > What's the default size of the file system cache for Solaris 10 x86 and can it > be tuned? > I read various posts on the subject and it's confusing..

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Jim Horng
For this type of migration, downtime is required. However, it can be reduced to a few hours or even a few minutes, depending on how much change needs to be synced. I have done this many times on a NetApp filer, but it can be applied to zfs as well. The first thing to consider is to only do the migration once, so
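
Sketched on ZFS with invented pool/dataset names, the sync-then-cutover approach looks roughly like this; only the final incremental send needs the writers stopped:
# zfs snapshot -r oldpool/data@sync1
# zfs send -R oldpool/data@sync1 | zfs recv -vF newpool/data             # bulk copy while still in service
# zfs snapshot -r oldpool/data@sync2                                     # taken after stopping the clients
# zfs send -R -I @sync1 oldpool/data@sync2 | zfs recv -vF newpool/data   # short final catch-up, then cut over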

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Jim Horng
So, on the point of not needing a migration back: even at 144 disks, they won't be in the same raid group. So figure out the best raid group size for you, since zfs doesn't support changing the number of disks in a raidz yet. I usually use the number of slots per shelf, or a good number is 7~10

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Jim Horng
Sorry, I need to correct myself. Mirroring LUNs on the Windows side to switch the storage pool under it is a great idea, and I think you can do this without downtime.

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Jim Horng
I understand your point. However, in most production systems the shelves are added incrementally, so it makes sense to tie the raid group size to the number of slots per shelf. And in most cases, withstanding a shelf failure is too much overhead on storage anyway. For example, in his case he will have to configure 1+0 ra

[zfs-discuss] OpenSolaris snv_134 zfs pool hangs after some time with dedup=on

2010-04-28 Thread Jim Horng
Sorry for the double post, but I think this is better suited for the zfs forum. I am running OpenSolaris snv_134 as a file server in a test environment, testing deduplication. I am transferring a large amount of data from our production server using rsync. The data pool is on a separate raidz1-0

Re: [zfs-discuss] OpenSolaris snv_134 zfs pool hangs after some time with dedup=on

2010-04-28 Thread Jim Horng
This is not a performance issue. The rsync hangs hard, and one of the child processes cannot be killed (I assume it's the one running on the zfs). By "the command gets slower" I am referring to the output of the file system commands (zpool, zfs, df, du, etc.) from a different shell. I left the

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Jim Horng
> 3 shelves with 2 controllers each. 48 drives per shelf. These are Fibre Channel attached. We would like all 144 drives added to the same large pool.
I would do either a 12 or 16 disk raidz3 vdev and spread the disks across controllers within vdevs. You may also want to leave at least 1 spare
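
As a sketch with invented controller/target names, such a layout could be created like this:
# zpool create tank \
    raidz3 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c6t0d0 \
           c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0 c6t1d0 \
    raidz3 c1t2d0 c2t2d0 c3t2d0 c4t2d0 c5t2d0 c6t2d0 \
           c1t3d0 c2t3d0 c3t3d0 c4t3d0 c5t3d0 c6t3d0 \
    spare  c1t4d0 c2t4d0
(further 12-disk raidz3 vdevs are added the same way until all 144 drives are placed)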

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Jim Horng
> Why would you recommend a spare for raidz2 or raidz3?
> -- richard
A spare is to minimize reconstruction time. Remember that a vdev cannot start resilvering until a spare disk is available, and with disks as big as they are today, resilvering takes many hours. I would rather have

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-29 Thread Jim Horng
> Would your opinion change if the disks you used took 7 days to resilver?
> Bob
That would only make a stronger case that a hot spare is absolutely needed. It would also make a strong case for choosing raidz3 over raidz2, as well as for vdevs with a smaller number of disks.

Re: [zfs-discuss] Panic when deleting a large dedup snapshot

2010-04-30 Thread Jim Horng
Looks like I am hitting the same issue now as in the earlier post that you responded to. http://opensolaris.org/jive/thread.jspa?threadID=128532&tstart=15 I continued my test migration with dedup=off and synced a couple more file systems. I decided to merge two of the file systems together by copyi

Re: [zfs-discuss] [storage-discuss] iscsitgtd failed request to share on zpool import after upgrade from b104 to b134

2010-05-04 Thread Jim Dunham
uild 136..., iSCSI Target Daemon (and ZFS shareiscsi) are gone, so you will need to reconfigure your two ZVOLs 'vol01/zvol01' and 'vol01/zvol02', under COMSTAR soon. http://wikis.sun.com/display/OpenSolarisInfo/How+to+Configure+iSCSI+Target+Port

Re: [zfs-discuss] [storage-discuss] iscsitgtd failed request to share on zpool import after upgrade from b104 to b134

2010-05-05 Thread Jim Dunham
Przem,
> Anybody has an idea what I can do about it?
zfs set shareiscsi=off vol01/zvol01
zfs set shareiscsi=off vol01/zvol02
Doing this will have no impact on the LUs if configured under COMSTAR. This will also transparently go away with b136, when ZFS ignores the shareiscsi property. -
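
For reference, a minimal COMSTAR sketch for re-exposing one of those ZVOLs; the GUID is whatever sbdadm prints, and target/initiator details are omitted:
# svcadm enable stmf
# svcadm enable -r svc:/network/iscsi/target:default
# sbdadm create-lu /dev/zvol/rdsk/vol01/zvol01      # prints the GUID of the new logical unit
# stmfadm add-view <GUID-printed-above>             # make the LU visible (here: to all hosts)
# itadm create-target                               # create an iSCSI target for initiators to log in to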

[zfs-discuss] How can I be sure the zfs send | zfs received is correct?

2010-05-09 Thread Jim Horng
Okay, so after some tests with dedup on snv_134, I decided we cannot use the dedup feature for the time being. Being unable to destroy a dedupped file system, I decided to migrate the file system to another pool and then destroy the pool. (see below) http://opensolaris.org/jive/thread.jspa?threadI

Re: [zfs-discuss] How can I be sure the zfs send | zfs received is correct?

2010-05-09 Thread Jim Horng
size of snapshot?
r...@filearch1:/var/adm# zfs list mpool/export/projects/project1...@today
NAME                                      USED  AVAIL  REFER  MOUNTPOINT
mpool/export/projects/project1...@today      0      -   407G  -
r...@filearch1:/var/adm# zfs list tank/export/projects/project1...@

Re: [zfs-discuss] How can I be sure the zfs send | zfs received is correct?

2010-05-10 Thread Jim Horng
I was expecting that zfs send tank/export/projects/project1...@today would send everything up to @today. That is the only snapshot and I am not using the -i option. The thing that worries me is that tank/export/projects/project1_nb was the first file system that I tested with full dedup and compression
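
One low-tech way to confirm the received copy, sketched with assumed mountpoints and the Solaris digest(1) utility, is to checksum every file in both snapshots and compare:
# cd /tank/export/projects/project1_nb/.zfs/snapshot/today
# find . -type f -exec digest -v -a md5 {} \; | sort > /tmp/src.md5
# cd /mpool/export/projects/project1_nb/.zfs/snapshot/today
# find . -type f -exec digest -v -a md5 {} \; | sort > /tmp/dst.md5
# diff /tmp/src.md5 /tmp/dst.md5 && echo "contents match"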

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-13 Thread Jim Horng
When I booted up without the disks in the slots, I manually brought the pool online with zpool clear. I believe that was what you were missing from your command. However, I did not try to change the controller. Hopefully you have only been unplugging disks while the system is turned off. If that's the case, the

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-14 Thread Jim Horng
You may or may not need to add the log device back. zpool clear should bring the pool online; either way, it shouldn't affect the data.
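
A sketch of those two steps, with placeholder pool and device names:
# zpool clear tank              # clear the errors so the pool comes back online
# zpool status tank             # verify the state before going further
# zpool add tank log c2t0d0     # only if the separate log device really needs to come back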

[zfs-discuss] Can I recover filesystem from an offline pool?

2010-05-25 Thread Jim Horng
Hi all, is there any procedure to recover a filesystem from an offline pool, or to bring a pool online quickly? Here is my issue:
* One 700GB zpool
* 1 filesystem with compression turned on (only using a few MB)
* Tried to migrate another filesystem from a different pool with a dedup stream, with zfs send

Re: [zfs-discuss] Can I recover filesystem from an offline pool?

2010-05-29 Thread Jim Horng
10GB of memory + 5 days later, the pool was imported. This file server is a virtual machine. I allocated 2GB of memory and 2 CPU cores, assuming this was enough to manage 6 TB (6x 1TB disks). The pool I am trying to recover is only 700 GB, not the 6TB pool I am trying to migrate. So I decided t

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2010-06-15 Thread Jim Klimov
sets. I thought of local zones first, but most people may init them by packages (though zoneadm says it is copying thousands of files), so /etc/skel might be a better example of the usecase - though nearly useless ,) jim

Re: [zfs-discuss] Fwd: zpool import despite missing log [PSARC/2010/292 Self Review]

2010-07-31 Thread Jim Doyle
A solution to this problem would be my early Christmas present! Here is how I lost access to an otherwise healthy mirrored pool two months ago: a box running snv_130 with two disks in a mirror and an iRAM battery-backed ZIL device was shut down cleanly and powered down normally. While I was away o
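
For builds that include the PSARC/2010/292 change, the recovery being wished for looks roughly like this (pool and device names are placeholders):
# zpool import                         # the pool is listed but complains about the missing log device
# zpool import -m tank                 # -m allows the import to proceed without the log vdev
# zpool remove tank <dead-log-device>  # then drop (or later replace) the lost ZIL device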

Re: [zfs-discuss] Adding ZIL to pool questions

2010-08-01 Thread Jim Doyle
times per hour, plus updates to files' atime attr - and that particular scale of operation will be greatly improved by an NVRAM ZIL. If I were to use a ZIL again, I'd use something like the ACARD DDR-2 SATA boxes, and not an SSD or an iRAM. -- Jim

Re: [zfs-discuss] Maximum zfs send/receive throughput

2010-08-06 Thread Jim Barker
I have been looking at why a zfs receive operation is terribly slow, and one observation that seems directly linked is that at any one time one of the CPUs is pegged at 100% sys while the other 5 (in my case) are relatively quiet. I haven't dug any deeper than that, but was curi

Re: [zfs-discuss] Maximum zfs send/receive throughput

2010-08-06 Thread Jim Barker
Just an update, I had a ticket open with Sun regarding this and it looks like they have a CR for what I was seeing (6975124).

Re: [zfs-discuss] Maximum zfs send/receive throughput

2010-08-06 Thread Jim Barker
I had found a way to get around the freeze, but I guess I just delayed the freeze a little longer. I provided Oracle with some explorer output and a crash dump to analyze, and this is the data they used to provide the information I passed on. Jim Barker

Re: [zfs-discuss] Solaris License with ZFS USER quotas?

2009-09-28 Thread Jim Grisanzio
ld an international group in English for the Tokyo OSUG. There are bi-lingual westerners and Japanese on both lists, and we have events in Yoga as well. http://mail.opensolaris.org/mailman/listinfo/ug-tsug (English ) http://mail.opensolaris.org/mailman/listinfo/ug-jposug (Japanese) Jim -- http://blogs.su

Re: [zfs-discuss] iscsi/comstar performance

2009-10-19 Thread Jim Dunham
king store device is not a ZVOL. Note: for ZVOL support there is a corresponding ZFS storage pool change to support this functionality, so a "zpool upgrade ..." to version 16 is required:
# zpool upgrade -v
. .
16  stmf property support
- Jim
The options seem

Re: [zfs-discuss] Performance problems with Thumper and >7TB ZFS pool using RAIDZ2

2009-10-24 Thread Jim Mauro
Posting to zfs-discuss. There's no reason this needs to be kept confidential. 5-disk RAIDZ2 - doesn't that equate to only 3 data disks? Seems pointless - they'd be much better off using mirrors, which is a better choice for random IO... Looking at this now... /jim Jeff Savit

[zfs-discuss] (home NAS) zfs and spinning down of drives

2009-11-04 Thread Jim Klimov
created inside an HSM volume, so that I have the flexibility of ZFS and the offline-storage capabilities of HSM? -- Thanks for any replies, including statements that my ideas are insane or my views are outdated ;) But constructive ones are more appreciated ;) //

Re: [zfs-discuss] (home NAS) zfs and spinning down of drives

2009-11-04 Thread Jim Klimov
Thanks for the link, but the main concern in spinning down drives of a ZFS pool is that ZFS by default is not so idle. Every 5 to 30 seconds it closes a transaction group (TXG) which requires a synchronous write of metadata to disk. I mentioned reading many blogs/forums on the matter, and some
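
For anyone experimenting with spin-down, the TXG sync interval itself is a tunable; a sketch assuming the zfs_txg_timeout variable found in contemporary builds:
# echo zfs_txg_timeout/D | mdb -k          # show the current interval, in seconds
# echo zfs_txg_timeout/W0t30 | mdb -kw     # stretch it to 30 seconds on the running kernel
(or persistently, in /etc/system:  set zfs:zfs_txg_timeout = 30)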

Re: [zfs-discuss] ZFS dedup issue

2009-12-02 Thread Jim Klimov
r one? In general, were there any stability issues with snv_128 during internal/BFU testing? TIA, //Jim

Re: [zfs-discuss] ZFS- When do you add more memory?

2009-12-23 Thread Jim Mauro
rformance of the customer's workload. As an aside, there's nothing about this that requires it be posted to zfs-discuss-confidential. I posted to zfs-disc...@opensolaris.org. Thanks, /jim Anthony Benenati wrote: Jim, The issue with using scan rate alone is if you are looking for why you

Re: [zfs-discuss] ZFS- When do you add more memory?

2009-12-23 Thread Jim Laurent
I think he's looking for a single, intuitively obvious, easy to access indicator of memory usage along the lines of the vmstat free column (before ZFS) that shows the current amount of free RAM. On Dec 23, 2009, at 4:09 PM, Jim Mauro wrote: > Hi Anthony - > > I don't get
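
Lacking a single vmstat-style figure, the nearest quick indicators are the ARC kstats and the mdb ::memstat summary, for example:
# kstat -p zfs:0:arcstats:size      # current ARC size, in bytes
# kstat -p zfs:0:arcstats:c         # current ARC target size
# echo ::memstat | mdb -k           # page-level breakdown (kernel, anon, free, etc.)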

[zfs-discuss] Recovering a broken mirror

2010-01-13 Thread Jim Sloey
We have a production SunFire V240 that had a zfs mirror until this week. One of the drives (c1t3d0) in the mirror failed. The system was shut down and the bad disk replaced without an export. I don't know what happened next, but by the time I got involved there was no evidence that the remaining go

Re: [zfs-discuss] Recovering a broken mirror

2010-01-13 Thread Jim Sloey
No. Only slice 6 from what I understand. I didn't create this (the person who did has left the company) and all I know is that the pool was mounted on /oraprod before it faulted.

Re: [zfs-discuss] Recovering a broken mirror

2010-01-15 Thread Jim Sloey
Never mind. It looks like the controller is flaky. Neither disk in the mirror is clean. Attempts to back up and recover the remaining disk produced I/O errors that were traced to the controller. Thanks for your help, Victor.

Re: [zfs-discuss] Oracle Performance - ZFS vs UFS

2010-02-13 Thread Jim Mauro
don't run them at 90% full. Read the link Richard sent for some additional information. Thanks, /jim Tony MacDoodle wrote: Was wondering if anyone has had any performance issues with Oracle running on ZFS as compa

[zfs-discuss] ZFS on a RAID Adapter?

2011-10-03 Thread Jim Klimov
o that any end-user OS (not only ones directly supporting ZFS) would benefit from ZFS resiliency, snapshots, caching, etc. with the simplicity of using a RAID adapter's exported volumes. Now, it is just a thought. But I wonder if it's possible... Or useful? :) Or if anyone has already done

[zfs-discuss] Fwd: Re: zvol space consumption vs ashift, metadata packing

2011-10-04 Thread Jim Klimov
ble - to reduce wear and increase efficiency - but the main idea is hopefully simple. //Jim

[zfs-discuss] Wanted: sanity check for a clustered ZFS idea

2011-10-10 Thread Jim Klimov
ks ASAP. So besides an invitation to bash these ideas and explain why they are wrong and impossible - if they are - there is also a hope to stir up a constructive discussion finally leading up to a working "clustered ZFS" solution, and one more reliable than my ideas above ;) I

Re: [zfs-discuss] zvol space consumption vs ashift, metadata packing

2011-10-10 Thread Jim Klimov
aiting for a chance to write several metadata blocks as well... Thus I think my second solution is viable. //Jim

Re: [zfs-discuss] Wanted: sanity check for a clustered ZFS idea

2011-10-13 Thread Jim Klimov
f capturing storage from hosts which died, and avoiding corruptions - but this is hopefully solved in the past decades of clustering tech's. Nico also confirmed that "one node has to be a master of all TXGs" - which is conveyed in both ideas of my original post. More directed replies

Re: [zfs-discuss] Wanted: sanity check for a clustered ZFS idea

2011-10-14 Thread Jim Klimov
2011-10-14 15:53, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov I guess Richard was correct about the usecase description - I should detail what I'm thinking about, to give some illustration.

[zfs-discuss] Thumper (X4500), and CF SSD for L2ARC = ?

2011-10-14 Thread Jim Klimov
Hello, I was asked if the CF port in Thumpers can be accessed by the OS? In particular, would it be a good idea to use a modern 600x CF card (some reliable one intended for professional photography) as an L2ARC device using this port? Thanks, //Jim

Re: [zfs-discuss] Wanted: sanity check for a clustered ZFS idea

2011-10-14 Thread Jim Klimov
n, just as it was accessible to the "old host". Again. NFS/iscsi/IB = ok. True, except that this is not an optimal solution in this described usecase - a farm of server blades with a relatively dumb fast raw storage (but NOT an intellectual storage server)

Re: [zfs-discuss] Thumper (X4500), and CF SSD for L2ARC = ?

2011-10-14 Thread Jim Klimov
ordan On Fri, Oct 14, 2011 at 5:39 AM, Jim Klimov wrote: Hello, I was asked if the CF port in Thumpers can be accessed by the OS? In particular, would it be a good idea to use a modern 600x CF card (some reliable one intended for professional photography) as an L2ARC device using this port? T

Re: [zfs-discuss] Thumper (X4500), and CF SSD for L2ARC = ?

2011-10-14 Thread Jim Klimov
2011-10-14 23:57, Gregory Shaw wrote: You might want to keep in mind that the X4500 was a ~2006 box, and had only PCI-X slots. Or, at least, that's what the 3 I've got have. I think the X4540 had PCIe, but I never got one of those. :-( I haven't seen any cache accelerator PCI-X cards. Howe

Re: [zfs-discuss] Wanted: sanity check for a clustered ZFS idea

2011-10-15 Thread Jim Klimov
ical consumer disks did get about 2-3 times faster for linear RW speeds over the past decade; but for random access they do still lag a lot. So, "agreed" ;) //Jim

Re: [zfs-discuss] Wanted: sanity check for a clustered ZFS idea

2011-10-15 Thread Jim Klimov
t ports of two managed switch modules can also become the networking core for the deployment site. Thanks, //Jim

Re: [zfs-discuss] All (pure) SSD pool rehash

2011-10-16 Thread Jim Klimov
have been reported several times. I think another rationale for SSD throttling was with L2ARC tasks - to reduce the probable effects of write overdriving in SSD hardware (less efficient and more wear on SSD cells). //Jim

Re: [zfs-discuss] about btrfs and zfs

2011-10-18 Thread Jim Klimov
of like send-recv in the same pool? Why is it not done yet? ;) //Jim

Re: [zfs-discuss] repair [was: about btrfs and zfs]

2011-10-19 Thread Jim Klimov
CDROM spin up - by a characteristic buzz in the headphones or on the loudspeakers. Whether other components would fail or not under such EMI - that depends. //Jim

Re: [zfs-discuss] repair [was: about btrfs and zfs]

2011-10-19 Thread Jim Klimov
oxide film is scratched off, and the cable works again, for a few months more... //Jim

[zfs-discuss] Growing CKSUM errors with no READ/WRITE errors

2011-10-19 Thread Jim Klimov
| Klimov Evgeny, Jim Klimov | Technical Director (CTO) | JSC "COS&HT" | +7-903-7705859 (cel

Re: [zfs-discuss] commercial zfs-based storage replication software?

2011-10-19 Thread Jim Klimov
e to work in Sol10 with little effort. HTH, //Jim

Re: [zfs-discuss] bootadm hang WAS tuning zfs_arc_min

2011-10-19 Thread Jim Klimov
t the repair shell in order to continue booting the OS. * brute force - updating the bootarchive (/platform/i86pc/boot_archive and /platform/i86pc/amd64/boot_archive ) manually as an FS image, with files listed in /boot/solaris/filelist.ramdisk. Usually failure on boot
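
When the broken root can still be mounted (assumed here at /a from the repair shell), the supported alternative to hand-editing the archive is bootadm:
# bootadm list-archive -R /a | head     # sanity-check what would go into the archive
# bootadm update-archive -R /a          # rebuild both boot_archive files under the altroot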

Re: [zfs-discuss] commercial zfs-based storage replication software?

2011-10-19 Thread Jim Klimov
2011-10-19 17:54, Fajar A. Nugraha wrote: On Wed, Oct 19, 2011 at 7:52 PM, Jim Klimov wrote: Well, just for the sake of completeness: most of our systems are using the zfs-auto-snap service, including Solaris 10 systems dating from Sol10u6. Installation of relevant packages from SXCE (ranging

Re: [zfs-discuss] Alternatives to NFS for sharing ZFS

2011-10-24 Thread Jim Klimov
w that's doable ;) //Jim

[zfs-discuss] (Incremental) ZFS SEND at sub-snapshot level

2011-10-29 Thread Jim Klimov
th the reverse of "zfs destroy @snapshot", meaning that some existing blocks would be reassigned as "owned" by a newly embedded snapshot instead of being "owned" by the live dataset or some more recent snapshot... //Jim

Re: [zfs-discuss] (Incremental) ZFS SEND at sub-snapshot level

2011-10-30 Thread Jim Klimov
2011-10-30 2:14, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov summer, and came up with a new question. In short, is it possible to add "restartability" to ZFS SEND, for example Rather tha

Re: [zfs-discuss] (Incremental) ZFS SEND at sub-snapshot level

2011-10-30 Thread Jim Klimov
2011-10-29 21:57, Jim Klimov wrote: ... In short, is it possible to add "restartability" to ZFS SEND, for example by adding artificial snapshots (of configurable increment size) into already existing datasets [too large to be zfs-sent successfully as one chunk of stream data]? On a

Re: [zfs-discuss] zfs destroy snapshot runs out of memory bug

2011-10-30 Thread Jim Klimov
ond in kernel probes, the watchdog program might not catch the problem soon enough to react. http://thumper.cos.ru/~jim/freeram-watchdog-20110610-v0.11.tgz Note that it WILL crash your system in case of RAM depletion, without syncs or service shutdowns. Since the RAM depletion happens quickly, it mi

Re: [zfs-discuss] zfs destroy snapshot runs out of memory bug

2011-10-30 Thread Jim Klimov
2011-10-31 1:13, Jim Klimov wrote: Sorry, I am late. ... If my memory and GoogleCache don't fail me too much, I ended up with the following incantations for pool-import attempts:
:; echo zfs_vdev_max_pending/W0t5 | mdb -kw
:; echo "aok/W 1" | mdb -kw
:; echo "zfs_re

Re: [zfs-discuss] zfs destroy snapshot runs out of memory bug

2011-10-31 Thread Jim Klimov
2011-10-31 16:28, Paul Kraus wrote: How big is / was the snapshot and dataset? I am dealing with a 7 TB dataset and a 2.5 TB snapshot on a system with 32 GB RAM. I had a smaller-scale problem, with datasets and snapshots sized several hundred GB, but on an 8 GB RAM system. So proportionall

Re: [zfs-discuss] (OT) forums and email

2011-11-02 Thread Jim Klimov
they WERE still a useful reference for many of us, even if posted a few years back... //Jim

Re: [zfs-discuss] zfs send/receive scenario problem w/ auto-snap service

2011-11-05 Thread Jim Klimov
0      -    22K  -
pool/export/distr@zfs-auto-snap:frequent-2011-11-05-17:00       0  -  4.81G  -
pool/export/home@zfs-auto-snap:frequent-2011-11-05-17:00        0  -   396M  -
pool/export/home/jim@zfs-auto-snap:frequent-2011-11-05-17:00    0  -  24.7M  -
If you only need filesystem

[zfs-discuss] Couple of questions about ZFS on laptops

2011-11-08 Thread Jim Klimov
ferred (and for what reason)? Also, how do other list readers place and solve their preferences with their OpenSolaris-based laptops? ;) Thanks, //Jim Klimov

[zfs-discuss] Single-disk rpool with inconsistent checksums, import fails

2011-11-08 Thread Jim Klimov
0t1 | mdb -kw In this case I am not very hesitant to recreate the rpool and reinstall the OS - it was mostly needed to serve the separate data pool. However this option is not always an acceptable one, so I wonder if anything can be done to repair an inconsistent non-redundant pool - at

Re: [zfs-discuss] Single-disk rpool with inconsistent checksums, import fails

2011-11-08 Thread Jim Klimov
2011-11-08 22:30, Jim Klimov wrote: Hello all, I have an oi_148a PC with a single root disk, and since recently it fails to boot - hangs after the copyright message whenever I use any of my GRUB menu options. Thanks to my wife's sister, who is my hands and eyes near the problematic PC, h

Re: [zfs-discuss] Couple of questions about ZFS on laptops

2011-11-08 Thread Jim Klimov
2011-11-08 23:36, Bob Friesenhahn wrote: On Tue, 8 Nov 2011, Jim Klimov wrote: Second question regards single-HDD reliability: I can do ZFS mirroring over two partitions/slices, or I can configure "copies=2" for the datasets. Either way I think I can get protection from bad blocks o
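
For comparison, the two single-disk options under discussion, sketched with invented slice names:
# zpool create dpool mirror c0t0d0s3 c0t0d0s4    # option 1: mirror two slices of the same disk
# zfs set copies=2 rpool/export/home             # option 2: keep one vdev, store two copies of the data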

Re: [zfs-discuss] Wanted: sanity check for a clustered ZFS idea

2011-11-08 Thread Jim Klimov
pool with both nodes accessing all of the data instantly and cleanly. Can this be true? ;) If this is not a deeply-kept trade secret, can the Nexenta people elaborate in technical terms how this cluster works? [1] http://www.nexenta.com/corp/sbb?gclid=CIzBg-aEqKwCFUK9zAodCSscsA

[zfs-discuss] "zfs hold" and "zfs send" on a readonly pool

2011-11-18 Thread Jim Klimov
d. I should've at least reported it ;) Thanks for any ideas, and good luck fixing it for the future ,) //Jim Klimov

Re: [zfs-discuss] virtualbox rawdisk discrepancy

2011-11-22 Thread Jim Klimov
2011-11-22 10:24, Frank Cusack wrote: On Mon, Nov 21, 2011 at 10:06 PM, Frank Cusack <fr...@linetwo.net> wrote: grub does need to have an idea of the device path, maybe in vbox it's seen as the 3rd disk (c0t2), so the boot device name written to grub.conf is "disk3" (whatever

[zfs-discuss] SUMMARY: mounting datasets from a read-only pool with aid of tmpfs

2011-11-22 Thread Jim Klimov
this:
# zfs snapshot -r pool/rpool-backup@2019-05
# zfs send -R pool/rpool-backup@2019-05 | zfs recv -vF rpool
Since the hardware was all the same, there was little else to do. I revised "RPOOL/rpool/boot/grub/menu.lst" and "RPOOL/etc/vfstab" just in case,

Re: [zfs-discuss] SUMMARY: mounting datasets from a read-only pool with aid of tmpfs

2011-11-22 Thread Jim Klimov
nts and /etc/vfstab for that now on some systems, but would like to avoid such complication if possible... //Jim

Re: [zfs-discuss] Compression

2011-11-22 Thread Jim Klimov
size, or the compressed filesize? My gut tells me that since they inflated _so_ badly when I storage vmotioned them, that they are the compressed values, but I would love to know for sure. -Matt Breitbach HTH, //Jim Klimov

Re: [zfs-discuss] Compression

2011-11-22 Thread Jim Klimov
sk anyway. However, the original question was about VM datastores, so large files were assumed. //Jim

Re: [zfs-discuss] Confusing zfs error message

2011-11-28 Thread Jim Klimov
ounds reasonable due to practice. If so, the error message "as is" happens to be valid. But you're correct that it might be more informative for this corner case as well... :) //Jim

[zfs-discuss] Scrub found error in metadata:0x0, is that always fatal? No checksum errors now...

2011-11-30 Thread Jim Klimov
ated != referred); can that be better diagnosed or repaired? Can this discrepancy of a few sectors' worth of size be a cause of, or be caused by, that reported metadata error? Thanks, // Jim Klimov sent from a mobile, pardon any typos ,)

Re: [zfs-discuss] Scrub found error in metadata:0x0, is that always fatal? No checksum errors now...

2011-12-02 Thread Jim Klimov
An intermediate update to my recent post: 2011-11-30 21:01, Jim Klimov wrote: Hello experts, I finally upgraded my troublesome oi-148a home storage box to oi-151a about a week ago (using the pkg update method from the wiki page - I'm not certain if that repository is fixed at relea

Re: [zfs-discuss] Scrub found error in metadata:0x0, is that always fatal? No checksum errors now...

2011-12-02 Thread Jim Klimov
2011-12-02 18:25, Steve Gonczi wrote: Hi Jim, Try to run a "zdb -b poolname" .. This should report any leaked or double allocated blocks. (It may or may not run, it tends to run out of memory and crash on large datasets.) I would be curious what zdb reports, and whether you are a

Re: [zfs-discuss] Scrub found error in metadata:0x0, is that always fatal? No checksum errors now...

2011-12-05 Thread Jim Klimov
me theories, suggestions or requests to dig up more clues - bring them on! ;) 2011-12-02 20:08, Nigel W wrote: On Fri, Dec 2, 2011 at 02:58, Jim Klimov wrote: My question still stands: is it possible to recover from this error or somehow safely ignore it? ;) I mean, without backing up data and

[zfs-discuss] zdb failures and reported errors

2011-12-06 Thread Jim Klimov
50 5
block traversal size 11986202624 != alloc 11986203136 (unreachable 512)
        bp count:          405927
        bp logical:   15030449664   avg: 37027
        bp physical:  12995855872   avg: 32015   compression: 1.16
        bp allocated: 13172434944   avg: 32450

Re: [zfs-discuss] LSI 3GB HBA SAS Errors (and other misc)

2011-12-06 Thread Jim Klimov
, the electrical links just stopped working after a while due to oxidization into the bulk of the metal blobs :) Still, congratulations that the replacement hardware did solve the problem! ;) //Jim

Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-07 Thread Jim Klimov
space. * (Technically, for very-often referenced blocks there is a number of copies, controlled by the ditto attribute.) HTH, //Jim Klimov

Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-11 Thread Jim Klimov
ith no disk IO. And I would be very surprised if speeds would be noticeably different ;) //Jim

Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-12 Thread Jim Klimov
2011-12-12 19:03, Pawel Jakub Dawidek wrote: On Sun, Dec 11, 2011 at 04:04:37PM +0400, Jim Klimov wrote: I would not be surprised to see that there is some disk IO adding delays for the second case (read of a deduped file "clone"), because you still have to determine references to t

Re: [zfs-discuss] Scrub found error in metadata:0x0, is that always fatal? No checksum errors now...

2011-12-17 Thread Jim Klimov
parent_type = raidz
zio_err = 50
zio_offset = 0x6ecb163000
zio_size = 0x8000
zio_objset = 0x0
zio_object = 0x0
zio_level = 0
zio_blkid = 0x0
__ttl = 0x1
__tod = 0x4ed70849 0x1a17d120
2011-12-02 13:58, Jim Klimov wrote: An interme

Re: [zfs-discuss] Scrub found error in metadata:0x0, is that always fatal? No checksum errors now...

2011-12-18 Thread Jim Klimov
deep metadata error. Now, can someone else please confirm this guess? If I were to just calculate the correct checksum and overwrite the on-disk version of the block with the "correct" one, would I likely make matters worse, or would it be okay? ;) Thanks to all that have already repl

Re: [zfs-discuss] very slow write performance on 151a

2011-12-19 Thread Jim Klimov
bad, * recommended return to 4Kb, we'll do 4*8K)
* greatly increases write speed in filled-up pools
set zfs:metaslab_min_alloc_size = 0x8000
set zfs:metaslab_smo_bonus_pct = 0xc8
** These values were described in greater detail on the list this summer, I think.
HTH,

Re: [zfs-discuss] Very poor pool performance - no zfs/controller errors?!

2011-12-19 Thread Jim Klimov
.. Basically this should be equivalent to "root-reserved 5%" on traditional FSes like UFS, EXT3, etc. Would it be indeed? Thanks, //Jim
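
One way to emulate that reserve on ZFS (dataset name and size invented) is an empty dataset carrying a reservation that can be released in an emergency:
# zfs create -o mountpoint=none tank/reserve
# zfs set reservation=50G tank/reserve      # roughly 5% of a 1 TB pool; adjust to taste
# zfs set reservation=none tank/reserve     # free the headroom when the pool gets into trouble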
