Did the fix for 6733267 make it into 134a (2010.05)? It isn't marked fixed, and I
couldn't find it anywhere in the changelogs. Does that mean we'll have to wait
for 2010.11 (or whatever v+2 is named)?
Thanks,
Moshe
Any news on this bug? We are trying to implement write acceleration, but can't
deploy to production while this issue remains unfixed. If anyone has an estimate
(e.g., would it be part of 10.02?) I would very much appreciate knowing.
Thanks,
Moshe
I am sorry, I think I confused matters a bit. I meant the bug that prevents
importing a pool with its slog device missing, 6733267.
I am aware that one can remove a slog device, but if you lose your rpool and
the slog device goes missing while you rebuild, you will lose your pool in its
entirety. Not a situ
I was responding to this:
"Now I have an exported file system that I cant import because of the log
device but the disks are all there. Except the original log device which
failed."
That actually refers to bug #6733267, not the one about slog removal. You can
remove a log device now (as of b125), but only if the pool
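For anyone following the thread, a rough sketch of the two operations being
discussed (pool and device names here are just examples, not taken from this
thread):

# Removing an attached log device - supported from b125 onward:
zpool remove tank c2t0d0

# Importing a pool whose log device has gone missing is what 6733267 is
# about; later builds eventually grew an option for this, at the cost of
# discarding any uncommitted log records:
zpool import -m tank

On the builds discussed here, the second case is exactly what fails.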
Thanks, Pablo. I think I confused matters - I meant to respond to the issue
in bug #6733267, and somehow landed on that one...
-----Original Message-----
From: Pablo Méndez Hernández [mailto:pabl...@gmail.com]
Sent: Monday, November 30, 2009 12:35 PM
To: Moshe Vainer
Cc: zfs-discuss
George, thank you very much! This is great news.
-----Original Message-----
From: george.wil...@sun.com [mailto:george.wil...@sun.com]
Sent: Monday, November 30, 2009 9:04 PM
To: Moshe Vainer
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] CR# 6574286, remove slog device
Moshe Vainer
Hi George.
Any news on this?
We tried the new LSI controllers in our configuration, trying to replace Areca
1680 controllers. The tests were done on 2009.06.
Unlike the mpt driver, which was rock solid (but obviously does not support the
new chips), mr_sas was a complete disaster. (We got ours from the LSI website.)
Timeouts,
>PS: For data that you want to mostly archive, consider using Amazon
>Web Services (AWS) S3 service. Right now there is no charge to push
>data into the cloud and its $0.15/gigabyte to keep it there. Do a
>quick (back of the napkin) calculation on what storage you can get for
>$30/month and factor
Vanilla 2009.06, with the mr_sas drivers from the LSI website.
To answer your other question - the mpt driver is very solid on 2009.06.
...releases in production.
From: Tim Cook [mailto:t...@cook.ms]
Sent: Thursday, January 21, 2010 5:45 PM
To: Moshe Vainer
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Does OpenSolaris mpt driver support LSI 2008
controller
On Thu, Jan 21, 2010 at 7:37 PM, Moshe Vainer
mailto:mvai...@
I thought I made it very clear - the mr_sas drivers came from the LSI website. No intention
to bash anything, just a user's experience. Sorry if that was misunderstood.
From: Tim Cook [mailto:t...@cook.ms]
Sent: Thursday, January 21, 2010 6:07 PM
To: Moshe Vainer
Cc: zfs-discuss@opensolaris.org
Subject: Re
Those aren't the bundled drivers - sorry for any misunderstanding. They are from the LSI site -
http://lsi.com/storage_home/products_home/internal_raid/megaraid_sas/6gb_s_value_line/sas9260-8i/index.html
- since 2009.06 didn't have them integrated yet. What I was trying to say is that
the current release of OS is not
I believe I have seen the same issue. Mine was documented as:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6843555
Areca did issue fixed firmware, but I can't say whether that was indeed the
end of it, since we haven't done a controlled disk-mixing experiment since then.
I did fi
Eric, is this answer by George wrong?
http://opensolaris.org/jive/message.jspa?messageID=439187#439187
Are we to expect the fix soon, or is there still no schedule?
Thanks,
Moshe
I have another very weird one; it looks like a recurrence of the same issue, but
with the new firmware.
We have the following disks:
AVAILABLE DISK SELECTIONS:
0. c7t1d0
/p...@0,0/pci8086,3...@3/pci17d3,1...@0/d...@1,0
1. c7t1d1
/p...@0,0/pci8086,3...@3/pci17d3,1
The links look fine, and I am pretty sure (though not 100%) that this is
related to the vdev ID assignment. What I am not sure about is whether this is
still an Areca firmware issue or an OpenSolaris issue.
ls -l /dev/dsk/c7t1d?p0
lrwxrwxrwx 1 root root 62 2010-02-08 17:43 /dev/dsk/c7t1d0p0 ->
../../d
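(A hedged way to check whether the device nodes still point at the disks ZFS
expects is to read the vdev labels directly; the device names mirror the
listing above, and slice 0 holding the label is an assumption:

zdb -l /dev/dsk/c7t1d0s0 | grep -i guid
zdb -l /dev/dsk/c7t1d1s0 | grep -i guid

If the GUIDs on the two devices have swapped relative to an earlier run, the
reshuffling happened below ZFS - controller or firmware - rather than in the
pool itself.)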
Except that none of these actually gives you information about the "send size".
"Used" gives you unique bytes, so in my understanding it would change as you
add more snapshots, and especially with dedup.
"Referenced" is more or less equivalent to running du on the .zfs/snapshot
directory.
So basically,
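(For an actual stream-size figure, a rough sketch - the snapshot names are
examples, not from this thread:

# Later ZFS releases support a dry-run estimate:
zfs send -nv -i tank/fs@snap1 tank/fs@snap2

# On builds without that, the only accurate number comes from counting the
# stream itself:
zfs send -i tank/fs@snap1 tank/fs@snap2 | wc -c
)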
We have just hit a hang like this.
Here's the output of ps -ef | grep zfs:
root 425 7 0 Jun 17 console 0:00 /usr/lib/saf/ttymon -g -d
/dev/console -l console -m ldterm,ttcompat -h -p zfs0
root 22879 22876 0 18:18:37 ? 0:01 /usr/sbin/zfs rollback -r
tank/aa
root
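(A hedged way to see where that rollback is actually stuck in the kernel,
using the PID from the listing above:

echo "0t22879::pid2proc | ::walk thread | ::findstack -v" | mdb -k

The stack should show whether the thread is waiting inside the ZFS
rollback/txg code or on something else entirely.)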
Forgot to mention: 1. This system was installed as 2008.11, so it should have
no upgrade issues.
2. Not sure how to do mdb -k on the dump; the only thing it produced is the
following:
> ::status
debugging live kernel (64-bit) on zfs05
operating system: 5.11 snv_101b (i86pc)
> $C
>
OK, sorry for spamming - I got some more info from mdb -k:
devu...@zfs05:/var/crash/zfs05# mdb -k unix.0 vmcore.0
mdb: failed to read panicbuf and panic_reg -- current register set will be
unavailable
Loading modules: [ unix genunix specfs dtrace cpu.generic uppc pcplusmp
scsi_vhci zfs sd ip hook n
Some more info - the system won't shut down; issuing shutdown -g0 -i5 just sits
there doing nothing.
Then I tried to find locks in the savecore I took - mdb crashes:
mdb -k ./unix.1 ./vmcore.1
mdb: failed to read panicbuf and panic_reg -- current register set will be
unavailable
Loading modules
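(If mdb does come up on that dump, a hedged next step is to summarize the
kernel thread stacks rather than hunting for locks directly, e.g. at the mdb
prompt:

> ::stacks -m zfs

As far as I know, the "failed to read panicbuf" warning on its own just means
the dump was taken from a live system rather than after a panic.)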
And pstack won't give a stack for the bootadm process:
devu...@zfs05:/var/crash/zfs05# pstack 23870
23870: /sbin/bootadm -a update_all
devu...@zfs05:/var/crash/zfs05# pstack -F 23870
23870: /sbin/bootadm -a update_all
devu...@zfs05:/var/crash/zfs05# kill -9 23870
devu...@zfs05:/var/crash/zfs05# kill -9