I'm really angry at ZFS:
My server no longer boots because the ZFS spacemap is corrupt again.
I just replaced the whole spacemap by recreating a new zpool from scratch and
copying the data back with "zfs send & zfs receive".
Did it copy the corrupt spacemap?!
For me it's over now. I lost
For the developers, this is my error message:
Sep 18 09:43:56 global genunix: [ID 361072 kern.warning] WARNING: zfs: freeing
free segment (offset=379733483520 size=4096)
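On the question above: space maps are pool-level allocation metadata and are not part of a send stream; the receiving pool builds its own fresh space maps, so a corrupt spacemap should not travel with the data. A minimal sketch of the send/receive migration described above (the pool names "oldpool" and "newpool" are placeholders, not from this thread):

```shell
# Sketch only: "oldpool" and "newpool" are placeholder pool names.
SNAP="migrate-$(date +%Y%m%d)"
if command -v zfs >/dev/null 2>&1 && zpool list oldpool >/dev/null 2>&1; then
  # Recursive snapshot of every dataset in the pool at one point in time.
  zfs snapshot -r "oldpool@$SNAP"
  # -R replicates all descendant datasets and their properties;
  # -F lets the receive roll back the target so the stream can apply.
  zfs send -R "oldpool@$SNAP" | zfs receive -F newpool
fi
```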
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On 09/18/10 06:47 PM, Carsten Aulbert wrote:
Hi all,
one of our systems just developed something remotely similar:
s06:~# zpool status
pool: atlashome
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
Hi
On Saturday 18 September 2010 10:02:42 Ian Collins wrote:
>
> I see this all the time on a troublesome Thumper. I believe this
> happens because the data in the pool is continuously changing.
Ah, OK, that may be; there is one particularly active user on this box right now.
Interesting, I've nev
On 18/09/10 09:02, Ian Collins wrote:
On 09/18/10 06:47 PM, Carsten Aulbert wrote:
Does anyone have an idea how it is possible to resilver 678G of data on a 500G
drive?
I see this all the time on a troublesome Thumper. I believe this happens
because the data in the pool is continuously changing.
On Sat, Sep 18, 2010 at 7:01 PM, Tom Bird wrote:
> All said and done though, we will have to live with snv_134's bugs from now
> on, or perhaps I could try Sol 10.
>
or OpenIllumos. Or Nexenta. Or FreeBSD. Or .
--
O< ascii ribbon campaign - stop html mail - www.asciiribbon.org
On 18/09/10 13:06, Edho P Arief wrote:
On Sat, Sep 18, 2010 at 7:01 PM, Tom Bird wrote:
All said and done though, we will have to live with snv_134's bugs from now
on, or perhaps I could try Sol 10.
or OpenIllumos. Or Nexenta. Or FreeBSD. Or.
... none of which will receive ZFS code updates
But all of which have newer code, today, than onnv-134.
Tom Bird wrote:
On 18/09/10 09:02, Ian Collins wrote:
In my case, other than an hourly snapshot, the data is not significantly
changing.
It'd be nice to see a response other than "you're doing it wrong";
rebuilding 5x the data on a drive relative to its capacity is clearly
erratic behaviour.
I have another dataset similar to the one I cannot get; if I do:
zdb - dataset_good 7
-
Dataset store/nfs/ICLOS/prod/mail-ginjus [ZPL], ID 2119, cr_txg 5736, 9.14G, 5
objects,
rootbp DVA[0]=<0:276a891800
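For read-only inspection, zdb can dump individual dataset objects and the pool's space maps. A sketch along the lines of the command above, assuming the pool name "store" taken from the dataset path and object number 7 as shown (the repeated -d/-m flags just raise verbosity):

```shell
# Read-only inspection; pool name "store" is assumed from the dataset path.
POOL="store"
if command -v zdb >/dev/null 2>&1; then
  zdb -dddd "$POOL/nfs/ICLOS/prod/mail-ginjus" 7   # dump object 7 in detail
  zdb -mm "$POOL"                                  # dump metaslab space maps
fi
```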
Hello,
I have a question about ZFS-8000-8A and block volumes.
I have 2 mirror sets in one zpool.
Build 134 amd64 (upgraded since it was released from 2009.06)
Pool version is still 13
pool: data
state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
On Sat, 18 Sep 2010, Heinrich wrote:
I SAN boot my systems and this block volume is a Windows install;
Windows boots and runs fine. It does, however, indicate that the disk
has a bad block in Event Viewer. I have been running this setup
since build 99 and boot CentOS, Win2k8 and Vista/7 from it.
Is there a way to fsck the spacemap?
Does a scrub help with this?
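On the fsck question: ZFS has no fsck. A scrub walks every allocated block and verifies checksums, repairing from redundancy where it can, but it does not rebuild space maps. A sketch, assuming the pool name "data" from the status output above:

```shell
POOL="data"   # pool name assumed from the status output above
if command -v zpool >/dev/null 2>&1 && zpool list "$POOL" >/dev/null 2>&1; then
  zpool scrub "$POOL"    # start a checksum walk of all allocated blocks
  zpool status "$POOL"   # shows scrub progress and any errors found
fi
```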
This is new for me:
$ zpool status
pool: rpool
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
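The usual follow-up to this status message, sketched for the rpool above:

```shell
POOL="rpool"
if command -v zpool >/dev/null 2>&1 && zpool list "$POOL" >/dev/null 2>&1; then
  zpool status -v "$POOL"   # -v lists any files with permanent errors
  zpool clear "$POOL"       # reset the device error counters
  zpool scrub "$POOL"       # then verify the pool end to end
fi
```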
Hi Folks,
I had a ZFS pool running on a hardware RAID-5 controller. I played with the
vendor's maintenance tool and now my zpool is in trouble: ZFS-8000-EY
r...@opensolaris:~# zpool import
pool: data
id: 15260857908242801044
state: FAULTED
status: The pool was last accessed by another system
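ZFS-8000-EY means the pool's recorded hostid does not match the importing host; importing with -f overrides that guard. Only do this when you are sure no other host still has the pool imported. A sketch for the pool above:

```shell
POOL="data"
if command -v zpool >/dev/null 2>&1; then
  # -f overrides the "last accessed by another system" guard (ZFS-8000-EY);
  # safe only when no other host still has the pool imported.
  zpool import -f "$POOL" || true   # sketch: tolerate failure if pool absent
fi
```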
So, I forgot the rest of the information:
ZFS v3
zpool v15
hardware RAID-5 with CPQary3
The pool was created in an early release and updated to v15 with FreeBSD 8.1
AMD64. The FreeBSD forum gave me the tip to install OpenSolaris snv_134. Now
it's running ;-)
regards ré