Re: [zfs-discuss] CR# 6574286, remove slog device

2010-05-10 Thread Moshe Vainer
Did the fix for 6733267 make it to 134a (2010.05)? It isn't marked fixed, and I couldn't find it anywhere in the changelogs. Does that mean we'll have to wait for 2010.11 (or whatever v+2 is named)? Thanks, Moshe -- This message posted from opensolaris.org

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-11-30 Thread Moshe Vainer
Any news on this bug? We are trying to implement write acceleration, but can't deploy to production with this issue still unfixed. If anyone has an estimate (e.g., would it be part of 10.02?), I would very much appreciate knowing. Thanks, Moshe

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-11-30 Thread Moshe Vainer
I am sorry, I think I confused matters a bit. I meant the bug that prevents importing with a slog device missing, 6733267. I am aware that one can remove a slog device, but if you lose your rpool and the device goes missing while you rebuild, you will lose your pool in its entirety. Not a situ

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-11-30 Thread Moshe Vainer
I was responding to this: "Now I have an exported file system that I can't import because of the log device, but the disks are all there. Except the original log device, which failed." That actually means bug #6733267, not the one about slog removal. You can remove a slog now (b125), but only if the poo

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-12-01 Thread Moshe Vainer
Thanks Pablo. I think I confused matters - I meant to respond to the issue in bug #6733267, and somehow landed on this one... -Original Message- From: Pablo Méndez Hernández [mailto:pabl...@gmail.com] Sent: Monday, November 30, 2009 12:35 PM To: Moshe Vainer Cc: zfs-discuss

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-12-01 Thread Moshe Vainer
George, thank you very much! This is great news. -Original Message- From: george.wil...@sun.com [mailto:george.wil...@sun.com] Sent: Monday, November 30, 2009 9:04 PM To: Moshe Vainer Cc: zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] CR# 6574286, remove slog device Moshe Vainer

Re: [zfs-discuss] CR# 6574286, remove slog device

2010-01-20 Thread Moshe Vainer
Hi George. Any news on this?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Does OpenSolaris mpt driver support LSI 2008 controller

2010-01-21 Thread Moshe Vainer
We tried the new LSI controllers in our configuration, trying to replace Areca 1680 controllers. The tests were done on 2009.06. Unlike the mpt drivers, which were rock solid (but obviously do not support the new chips), the mr_sas driver was a complete disaster. (We got ours from the LSI website.) Timeouts,

Re: [zfs-discuss] need a few suggestions for a poor man's ZIL/SLOG device

2010-01-21 Thread Moshe Vainer
>PS: For data that you want to mostly archive, consider using Amazon Web Services (AWS) S3 service. Right now there is no charge to push data into the cloud and it's $0.15/gigabyte to keep it there. Do a quick (back of the napkin) calculation on what storage you can get for $30/month and factor
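The back-of-the-napkin math the quoted poster suggests can be sketched in shell. The $30/month budget and $0.15/GB-month rate come from the quote above; the 200 GB result is simply what that budget buys at that rate, not a figure from the original post:

```shell
# Back-of-the-napkin S3 storage cost at the quoted $0.15/GB-month rate.
# Figures are illustrative; data transfer into S3 was free at the time.
rate_cents_per_gb=15          # $0.15/GB-month, expressed in cents
budget_dollars=30             # monthly budget from the quoted post
gb=$(( budget_dollars * 100 / rate_cents_per_gb ))
echo "\$${budget_dollars}/month buys about ${gb} GB of S3 storage"
```

Running it prints that $30/month covers about 200 GB, which is the comparison point against local disk the poster was driving at.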

Re: [zfs-discuss] Does OpenSolaris mpt driver support LSI 2008 controller

2010-01-21 Thread Moshe Vainer
Vanilla 2009.06, mr_sas drivers from the LSI website. To answer your other question - the mpt driver is very solid on 2009.06.

Re: [zfs-discuss] Does OpenSolaris mpt driver support LSI 2008 controller

2010-01-22 Thread Moshe Vainer
eleases in production. From: Tim Cook [mailto:t...@cook.ms] Sent: Thursday, January 21, 2010 5:45 PM To: Moshe Vainer Cc: zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] Does OpenSolaris mpt driver support LSI 2008 controller On Thu, Jan 21, 2010 at 7:37 PM, Moshe Vainer mailto:mvai...@

Re: [zfs-discuss] Does OpenSolaris mpt driver support LSI 2008 controller

2010-01-22 Thread Moshe Vainer
I thought I made it very clear - mr_sas drivers from the LSI website. No intention to bash anything, just a user experience. Sorry if that was misunderstood. From: Tim Cook [mailto:t...@cook.ms] Sent: Thursday, January 21, 2010 6:07 PM To: Moshe Vainer Cc: zfs-discuss@opensolaris.org Subject: Re

Re: [zfs-discuss] Does OpenSolaris mpt driver support LSI 2008 controller

2010-01-22 Thread Moshe Vainer
Those aren't - sorry for any misunderstanding. They are from the LSI site - http://lsi.com/storage_home/products_home/internal_raid/megaraid_sas/6gb_s_value_line/sas9260-8i/index.html - since 2009.06 didn't have them integrated yet. What I was trying to say is that the current release of OS is not

Re: [zfs-discuss] zpool status output confusing

2010-02-02 Thread Moshe Vainer
I believe I have seen the same issue. Mine was documented as: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6843555 Areca did issue fixed firmware, but I can't say whether that was indeed the end of it, since we haven't done a controlled disk-mixing experiment since then. I did fi

Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-16 Thread Moshe Vainer
Eric, is this answer by George wrong? http://opensolaris.org/jive/message.jspa?messageID=439187#439187 Are we to expect the fix soon or is there still no schedule? Thanks, Moshe

Re: [zfs-discuss] zpool status output confusing

2010-02-17 Thread Moshe Vainer
I have another very weird one; it looks like a recurrence of the same issue, but with the new firmware. We have the following disks:

AVAILABLE DISK SELECTIONS:
  0. c7t1d0 /p...@0,0/pci8086,3...@3/pci17d3,1...@0/d...@1,0
  1. c7t1d1 /p...@0,0/pci8086,3...@3/pci17d3,1

Re: [zfs-discuss] zpool status output confusing

2010-02-17 Thread Moshe Vainer
The links look fine, and I am pretty sure (though not 100%) that this is related to the vdev id assignment. What I am not sure of is whether this is still an Areca firmware issue or an OpenSolaris issue.

ls -l /dev/dsk/c7t1d?p0
lrwxrwxrwx 1 root root 62 2010-02-08 17:43 /dev/dsk/c7t1d0p0 -> ../../d

Re: [zfs-discuss] Snapshot sizes

2010-03-01 Thread Moshe Vainer
Except none of these actually gives you the information about "send size". Used will give you unique bytes, so in my understanding it would change as you add more snapshots, and especially with dedup. Referenced is more or less equivalent to running du on the .zfs/snapshot directory. So basically,
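The closest thing I know of to an actual send-size figure is a dry-run send: `zfs send -nv` prints an estimate without transmitting data (treat the flag availability as an assumption - it landed in later builds than some discussed here). A small awk filter can pull the number out of that output; the echo below just stands in for the real command so the filter itself can be shown:

```shell
# Extract the estimated stream size from `zfs send -nv` output.
# On a live system you would run:
#   zfs send -nv tank/fs@snap 2>&1 | estimate
# (dataset name is hypothetical). The echo simulates typical output.
estimate() { awk '/estimated size/ { print $NF }'; }
echo "total estimated size is 1.53G" | estimate
```

This prints just the size field (1.53G for the sample line), which is what you would compare against Used or Referenced.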

Re: [zfs-discuss] zfs destroy hanging

2009-07-20 Thread Moshe Vainer
We have just got a hang like this. Here's the output of ps -ef | grep zfs:

root   425     7 0   Jun 17 console 0:00 /usr/lib/saf/ttymon -g -d /dev/console -l console -m ldterm,ttcompat -h -p zfs0
root 22879 22876 0 18:18:37 ?       0:01 /usr/sbin/zfs rollback -r tank/aa
root

Re: [zfs-discuss] zfs destroy hanging

2009-07-20 Thread Moshe Vainer
Forgot to mention:
1. This system was installed as 2008.11, so it should have no upgrade issues.
2. Not sure how to do the mdb -k on the dump; the only thing it produced is the following:

> ::status
debugging live kernel (64-bit) on zfs05
operating system: 5.11 snv_101b (i86pc)
> $C
>

Re: [zfs-discuss] zfs destroy hanging

2009-07-20 Thread Moshe Vainer
Ok, sorry for spamming - got some more info from mdb -k:

devu...@zfs05:/var/crash/zfs05# mdb -k unix.0 vmcore.0
mdb: failed to read panicbuf and panic_reg -- current register set will be unavailable
Loading modules: [ unix genunix specfs dtrace cpu.generic uppc pcplusmp scsi_vhci zfs sd ip hook n

Re: [zfs-discuss] zfs destroy hanging

2009-07-21 Thread Moshe Vainer
Some more info - the system won't shut down; issuing shutdown -g0 -i5 just sits there doing nothing. Then I tried to find locks on the savecore I took, and mdb crashes:

mdb -k ./unix.1 ./vmcore.1
mdb: failed to read panicbuf and panic_reg -- current register set will be unavailable
Loading modules

Re: [zfs-discuss] zfs destroy hanging

2009-07-21 Thread Moshe Vainer
And pstack won't give a stack for the bootadm process:

devu...@zfs05:/var/crash/zfs05# pstack 23870
23870: /sbin/bootadm -a update_all
devu...@zfs05:/var/crash/zfs05# pstack -F 23870
23870: /sbin/bootadm -a update_all
devu...@zfs05:/var/crash/zfs05# kill -9 23870
devu...@zfs05:/var/crash/zfs05# kill -9