30.11.2019 15:48, Eugene Grosbein wrote:
> Hi!
>
> I have RAIDZ1 with five GELI-encrypted SSDs da[2-6].eli (non-boot pool).
>
> I've exported the pool, destroyed da2.eli, then successfully imported the pool
> back in degraded state.
> Then I've mounted some file systems successfully, but zfs mount fo
Hi!
I have RAIDZ1 with five GELI-encrypted SSDs da[2-6].eli (non-boot pool).
I've exported the pool, destroyed da2.eli, then successfully imported the pool back
in degraded state.
Then I've mounted some file systems successfully, but zfs mount for the next one
hung on [tx->tx_sync_done_cv]
for 4400 secon
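For reference, the sequence described above corresponds roughly to the following commands (the pool and dataset names are made up, since the report does not give them, and geli(8) is assumed for the "destroyed" step):

  zpool export tank                 # pool name assumed
  geli detach da2                   # or geli kill; the report only says "destroyed"
  zpool import tank                 # comes back DEGRADED with 4 of 5 raidz1 members
  zfs mount tank/fs1                # several filesystems mount fine
  zfs mount tank/fs2                # this one hangs waiting on tx->tx_sync_done_cv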
On 11/18/2016 13:30, Andriy Gapon wrote:
On 14/11/2016 14:00, Henri Hennebert wrote:
On 11/14/2016 12:45, Andriy Gapon wrote:
Okay. Luckily for us, it seems that 'm' is available in frame 5. It also
happens to be the first field of 'struct faultstate'. So, could you please go
to frame and
On 14/11/2016 14:00, Henri Hennebert wrote:
> On 11/14/2016 12:45, Andriy Gapon wrote:
>> Okay. Luckily for us, it seems that 'm' is available in frame 5. It also
>> happens to be the first field of 'struct faultstate'. So, could you please
>> go
>> to frame and print '*m' and '*(struct faultst
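As a sketch, the part of the request that is quoted in full amounts to the following (the second expression, a cast to 'struct faultstate *', is cut off above and is not reconstructed here):

  (kgdb) frame 5
  (kgdb) print *m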
On 11/14/2016 12:45, Andriy Gapon wrote:
On 14/11/2016 11:35, Henri Hennebert wrote:
On 11/14/2016 10:07, Andriy Gapon wrote:
Hmm, I've just noticed another interesting thread:
Thread 668 (Thread 101245):
#0 sched_switch (td=0xf800b642aa00, newtd=0xf8000285f000, flags=<value optimized out>) at /usr/src
On 14/11/2016 11:35, Henri Hennebert wrote:
>
>
> On 11/14/2016 10:07, Andriy Gapon wrote:
>> Hmm, I've just noticed another interesting thread:
>> Thread 668 (Thread 101245):
>> #0 sched_switch (td=0xf800b642aa00, newtd=0xf8000285f000,
>> flags=<value optimized out>) at /usr/src/sys/kern/sc
On 11/14/2016 10:07, Andriy Gapon wrote:
On 13/11/2016 15:28, Henri Hennebert wrote:
On 11/13/2016 11:06, Andriy Gapon wrote:
On 12/11/2016 14:40, Henri Hennebert wrote:
[snip]
Could you please show 'info local' in frame 14?
I expected that 'nd' variable would be defined there and it may
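The corresponding kgdb commands, for reference:

  (kgdb) frame 14
  (kgdb) info locals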
On 13/11/2016 15:28, Henri Hennebert wrote:
> On 11/13/2016 11:06, Andriy Gapon wrote:
>> On 12/11/2016 14:40, Henri Hennebert wrote:
>>> I attach it
>>
>> Thank you!
>> So, these two threads are trying to get the lock in the exclusive mode:
>> Thread 687 (Thread 101243):
>> #0 sched_switch (td=0
On 11/13/2016 14:28, Henri Hennebert wrote:
These 2 threads are innd processes. In core.txt.4:
8 14789 29165 0 24 4 40040 6612 zfs DN- 0:00.00 [innd]
8 29165 1 0 20 0 42496 6888 select Ds- 0:01.33 [innd]
8 49778 29165 0 24 4 40040 6900 zfs
On 11/13/2016 11:06, Andriy Gapon wrote:
On 12/11/2016 14:40, Henri Hennebert wrote:
I attach it
Thank you!
So, these two threads are trying to get the lock in the exclusive mode:
Thread 687 (Thread 101243):
#0 sched_switch (td=0xf800b642b500, newtd=0xf8000285ea00, flags=<value optimized out>) at /usr/sr
On 12/11/2016 14:40, Henri Hennebert wrote:
> I attach it
Thank you!
So, these two threads are trying to get the lock in the exclusive mode:
Thread 687 (Thread 101243):
#0 sched_switch (td=0xf800b642b500, newtd=0xf8000285ea00, flags=<value optimized out>) at /usr/src/sys/kern/sched_ule.c:1973
#1 0x8
On 11/11/2016 16:50, Henri Hennebert wrote:
>
>
> On 11/11/2016 12:24, Andriy Gapon wrote:
>>
>> At this stage I would try to get a system crash dump for post-mortem
>> analysis.
> There are a few ways to do that. You can enter ddb and then run 'dump' and
>> 'reset' commands. Or you can just d
On 11/11/2016 12:24, Andriy Gapon wrote:
At this stage I would try to get a system crash dump for post-mortem analysis.
There are a few ways to do that. You can enter ddb and then run 'dump' and
'reset' commands. Or you can just do `sysctl debug.kdb.panic=1`.
In either case, please double-che
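A sketch of the two options, assuming a dump device is configured (e.g. dumpdev="AUTO" in /etc/rc.conf) so that savecore(8) can save the dump to /var/crash on the next boot:

  # from the in-kernel debugger prompt:
  ddb> dump
  ddb> reset

  # or from a shell, force a panic, which then dumps and reboots:
  sysctl debug.kdb.panic=1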
On 10/11/2016 21:41, Henri Hennebert wrote:
> On 11/10/2016 19:40, Andriy Gapon wrote:
>> On 10/11/2016 19:55, Henri Hennebert wrote:
>>>
>>>
>>> On 11/10/2016 18:33, Andriy Gapon wrote:
On 10/11/2016 18:12, Henri Hennebert wrote:
> On 11/10/2016 16:54, Andriy Gapon wrote:
>> On 10/11/
On 11/10/2016 19:40, Andriy Gapon wrote:
On 10/11/2016 19:55, Henri Hennebert wrote:
On 11/10/2016 18:33, Andriy Gapon wrote:
On 10/11/2016 18:12, Henri Hennebert wrote:
On 11/10/2016 16:54, Andriy Gapon wrote:
On 10/11/2016 17:20, Henri Hennebert wrote:
On 11/10/2016 15:00, Andriy Gapon w
On 10/11/2016 19:55, Henri Hennebert wrote:
>
>
> On 11/10/2016 18:33, Andriy Gapon wrote:
>> On 10/11/2016 18:12, Henri Hennebert wrote:
>>> On 11/10/2016 16:54, Andriy Gapon wrote:
On 10/11/2016 17:20, Henri Hennebert wrote:
> On 11/10/2016 15:00, Andriy Gapon wrote:
>> Interesting
On 11/10/2016 18:33, Andriy Gapon wrote:
On 10/11/2016 18:12, Henri Hennebert wrote:
On 11/10/2016 16:54, Andriy Gapon wrote:
On 10/11/2016 17:20, Henri Hennebert wrote:
On 11/10/2016 15:00, Andriy Gapon wrote:
Interesting. I can not spot any suspicious thread that would hold the vnode
loc
On 10/11/2016 18:12, Henri Hennebert wrote:
> On 11/10/2016 16:54, Andriy Gapon wrote:
>> On 10/11/2016 17:20, Henri Hennebert wrote:
>>> On 11/10/2016 15:00, Andriy Gapon wrote:
Interesting. I can not spot any suspicious thread that would hold the
vnode
lock. Could you please run
On 11/10/2016 16:54, Andriy Gapon wrote:
On 10/11/2016 17:20, Henri Hennebert wrote:
On 11/10/2016 15:00, Andriy Gapon wrote:
Interesting. I can not spot any suspicious thread that would hold the vnode
lock. Could you please run kgdb (just like that, no arguments), then execute
'bt' command a
On 10/11/2016 17:20, Henri Hennebert wrote:
> On 11/10/2016 15:00, Andriy Gapon wrote:
>> Interesting. I can not spot any suspicious thread that would hold the vnode
>> lock. Could you please run kgdb (just like that, no arguments), then execute
>> 'bt' command and then select a frame when _vn_lo
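A sketch of that session on the live kernel (the thread id and frame number below are placeholders; the TID would come from procstat -kk output such as the one quoted later in this thread):

  kgdb                      # no arguments: attaches to the running kernel
  (kgdb) tid 101112         # assumed step: switch to the stuck thread
  (kgdb) bt
  (kgdb) frame 7            # pick whichever frame shows _vn_lock
  (kgdb) info locals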
On 11/10/2016 15:00, Andriy Gapon wrote:
On 10/11/2016 12:30, Henri Hennebert wrote:
On 11/10/2016 11:21, Andriy Gapon wrote:
On 09/11/2016 15:58, Eric van Gyzen wrote:
On 11/09/2016 07:48, Henri Hennebert wrote:
I encounter a strange deadlock on
FreeBSD avoriaz.restart.bel 11.0-RELEASE-p3 F
On 10/11/2016 12:30, Henri Hennebert wrote:
> On 11/10/2016 11:21, Andriy Gapon wrote:
>> On 09/11/2016 15:58, Eric van Gyzen wrote:
>>> On 11/09/2016 07:48, Henri Hennebert wrote:
I encounter a strange deadlock on
FreeBSD avoriaz.restart.bel 11.0-RELEASE-p3 FreeBSD 11.0-RELEASE-p3 #
On 11/10/2016 11:21, Andriy Gapon wrote:
On 09/11/2016 15:58, Eric van Gyzen wrote:
On 11/09/2016 07:48, Henri Hennebert wrote:
I encounter a strange deadlock on
FreeBSD avoriaz.restart.bel 11.0-RELEASE-p3 FreeBSD 11.0-RELEASE-p3 #0 r308260:
Fri Nov 4 02:51:33 CET 2016
r...@avoriaz.restart.be
On 09/11/2016 15:58, Eric van Gyzen wrote:
> On 11/09/2016 07:48, Henri Hennebert wrote:
>> I encounter a strange deadlock on
>>
>> FreeBSD avoriaz.restart.bel 11.0-RELEASE-p3 FreeBSD 11.0-RELEASE-p3 #0
>> r308260:
>> Fri Nov 4 02:51:33 CET 2016
>> r...@avoriaz.restart.bel:/usr/obj/usr/src/sys/AV
On 11/09/2016 19:23, Thierry Thomas wrote:
On Wed, 9 Nov 2016 at 15:03:49 +0100, Henri Hennebert
wrote:
[root@avoriaz ~]# procstat -kk 85656
  PID    TID COMM             TDNAME           KSTACK
85656 101112 find             -                mi_switch+0xd2
sleepq_wait+0x3a sleeplk+0x1b4 __lockmgr_a
On Wed, 9 Nov 2016 at 15:03:49 +0100, Henri Hennebert
wrote:
> [root@avoriaz ~]# procstat -kk 85656
>   PID    TID COMM             TDNAME           KSTACK
> 85656 101112 find             -                mi_switch+0xd2
> sleepq_wait+0x3a sleeplk+0x1b4 __lockmgr_args+0x356 vop_stdlock+0x3c
> VOP_LOC
On 11/09/2016 14:58, Eric van Gyzen wrote:
On 11/09/2016 07:48, Henri Hennebert wrote:
I encounter a strange deadlock on
FreeBSD avoriaz.restart.bel 11.0-RELEASE-p3 FreeBSD 11.0-RELEASE-p3 #0 r308260:
Fri Nov 4 02:51:33 CET 2016
r...@avoriaz.restart.bel:/usr/obj/usr/src/sys/AVORIAZ amd64
Thi
On 11/09/2016 07:48, Henri Hennebert wrote:
> I encounter a strange deadlock on
>
> FreeBSD avoriaz.restart.bel 11.0-RELEASE-p3 FreeBSD 11.0-RELEASE-p3 #0
> r308260:
> Fri Nov 4 02:51:33 CET 2016
> r...@avoriaz.restart.bel:/usr/obj/usr/src/sys/AVORIAZ amd64
>
> This system is exclusively runni
I encounter a strange deadlock on
FreeBSD avoriaz.restart.bel 11.0-RELEASE-p3 FreeBSD 11.0-RELEASE-p3 #0
r308260: Fri Nov 4 02:51:33 CET 2016
r...@avoriaz.restart.bel:/usr/obj/usr/src/sys/AVORIAZ amd64
This system is exclusively running on zfs.
After 3 or 4 days, `periodic daily` is locked
on 29/01/2013 05:21 Garrett Wollman said the following:
> When
> I restarted mountd, it hung waiting on rrl->rr_, but the system may
> already have been deadlocked at that point. procstat reported:
>
> 87678 104365 mountd           -                mi_switch sleepq_wait _cv_wait
> rrw_enter zfs_
I just had a big fileserver deadlock in an odd way. I was
investigating a user's problem, and decided for various reasons to
restart mountd. It had been complaining like this:
Jan 28 21:06:43 nfs-prod-1 mountd[1108]: can't delete exports for
/usr/local/.zfs/snapshot/monthly-2013-01: Invalid arg
On Sep 29, 2009, at 10:29 AM, Borja Marcos wrote:
I have observed a deadlock condition when using ZFS. We are making
heavy use of zfs send/zfs receive to keep a replica of a dataset
on a remote machine. It can be done at one-minute intervals. Maybe
we're making a somewhat atypical use
On Sep 29, 2009, at 10:29 AM, Borja Marcos wrote:
Hello,
I have observed a deadlock condition when using ZFS. We are making
heavy use of zfs send/zfs receive to keep a replica of a dataset
on a remote machine. It can be done at one-minute intervals. Maybe
we're making a somewhat atypi
Hello,
I have observed a deadlock condition when using ZFS. We are making
heavy use of zfs send/zfs receive to keep a replica of a dataset on
a remote machine. It can be done at one-minute intervals. Maybe we're
making a somewhat atypical use of ZFS, but, well, it seems to be a great
so
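For context, that kind of replication is usually a loop along these lines (dataset, snapshot and host names are illustrative; the thread does not show the actual commands):

  zfs snapshot tank/data@min-1030
  zfs send -i tank/data@min-1029 tank/data@min-1030 | \
      ssh replica zfs receive -F tank/data
  zfs destroy tank/data@min-1029     # optionally prune the previous snapshot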
getting it into production (just tell me what to do ;) ).
- Original message -
From: Spike Ilacqua <[EMAIL PROTECTED]>
To: Ender <[EMAIL PROTECTED]>
CC: [EMAIL PROTECTED]; freebsd-stable@freebsd.org; Johan Ström <[EMAIL
PROTECTED]>
Sent: Tuesday, 8 April 2008 18:1
Tuesday, 8 April 2008 18:13:32
Subject: Re: ZFS deadlock
> Depending on your work load you are just buying more time, so
> "reasonable" is a matter of perspective. :( I didn't see if you said
> you are on 32bit or 64bit? Keep in mind the kmem max is 1.5-2G on amd64
>
Depending on your work load you are just buying more time, so
"reasonable" is a matter of perspective. :( I didn't see if you said
you are on 32bit or 64bit? Keep in mind the kmem max is 1.5-2G on amd64
regardless of how much memory you have. If 512M arcsize crashes too soon
for your tastes y
Spike Ilacqua wrote:
Depending on your work load you are just buying more time, so
"reasonable" is a matter of perspective. :( I didn't see if you said
you are on 32bit or 64bit? Keep in mind the kmem max is 1.5-2G on
amd64 regardless of how much memory you have. If 512M arcsize crashes
too
Johan Ström wrote:
On Apr 8, 2008, at 9:40 AM, LI Xin wrote:
For your question: just rebooting would be fine. You may want to tune
your arc size (to be smaller) and kmem space (to be larger), which
would reduce the chance that this would happen, or eliminate it,
depending on your workload.
Bac
On Apr 8, 2008, at 9:40 AM, LI Xin wrote:
For your question: just rebooting would be fine. You may want to tune
your arc size (to be smaller) and kmem space (to be larger), which
would reduce the chance that this would happen, or eliminate it,
depending on your workload.
Back online now, wit
On Apr 8, 2008, at 9:37 AM, LI Xin wrote:
Johan Ström wrote:
Hello
A box of mine running RELENG_7_0 and ZFS over a couple of disks (6
disks, 3 mirrors) seems to have gotten stuck. From Ctrl-T:
load: 0.50 cmd: zsh 40188
[zfs:&buf_hash_table.ht_locks[i].ht_lock] 0.02u 0.04s 0% 3404k
load: 0.
For your question: just rebooting would be fine. You may want to tune your
arc size (to be smaller) and kmem space (to be larger), which would
reduce the chance that this would happen, or eliminate it, depending on
your workload.
This situation is not recoverable, and you can trust ZFS that you wi
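On FreeBSD 7.x that tuning is typically done in /boot/loader.conf, for example (the values are illustrative, not taken from this thread):

  vm.kmem_size="1536M"
  vm.kmem_size_max="1536M"
  vfs.zfs.arc_max="512M"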
On Apr 8, 2008, at 9:32 AM, Jeremy Chadwick wrote:
On Tue, Apr 08, 2008 at 08:17:38AM +0200, Johan Ström wrote:
Hello
A box of mine running RELENG_7_0 and ZFS over a couple of disks (6
disks, 3
mirrors) seems to have gotten stuck. From Ctrl-T:
load: 0.50 cmd: zsh 40188
[zfs:&buf_hash_ta
Johan Ström wrote:
Hello
A box of mine running RELENG_7_0 and ZFS over a couple of disks (6
disks, 3 mirrors) seems to have gotten stuck. From Ctrl-T:
load: 0.50 cmd: zsh 40188 [zfs:&buf_hash_table.ht_locks[i].ht_lock]
0.02u 0.04s 0% 3404k
load: 0.43 cmd: zsh 40188 [zfs:&buf_hash_table.ht_
On Tue, Apr 08, 2008 at 08:17:38AM +0200, Johan Ström wrote:
> Hello
>
> A box of mine running RELENG_7_0 and ZFS over a couple of disks (6 disks, 3
> mirrors) seems to have gotten stuck. From Ctrl-T:
>
> load: 0.50 cmd: zsh 40188 [zfs:&buf_hash_table.ht_locks[i].ht_lock] 0.02u
> 0.04s 0% 3404k
Hello
A box of mine running RELENG_7_0 and ZFS over a couple of disks (6
disks, 3 mirrors) seems to have gotten stuck. From Ctrl-T:
load: 0.50 cmd: zsh 40188 [zfs:&buf_hash_table.ht_locks[i].ht_lock]
0.02u 0.04s 0% 3404k
load: 0.43 cmd: zsh 40188 [zfs:&buf_hash_table.ht_locks[i].ht_lock]
Henri Hennebert wrote:
Pawel Jakub Dawidek wrote:
On Sat, Nov 10, 2007 at 12:39:27PM +0100, Henri Hennebert wrote:
Pawel Jakub Dawidek wrote:
On Fri, Nov 09, 2007 at 05:37:00PM +0100, Henri Hennebert wrote:
hello
To push zfs, I launch 2 scrubs at the same time; after ~20 seconds
the system f
Pawel Jakub Dawidek wrote:
On Sat, Nov 10, 2007 at 12:39:27PM +0100, Henri Hennebert wrote:
Pawel Jakub Dawidek wrote:
On Fri, Nov 09, 2007 at 05:37:00PM +0100, Henri Hennebert wrote:
hello
To push zfs, I launch 2 scrubs at the same time; after ~20 seconds the
system freezes:
[...]
I found a
On Sat, Nov 10, 2007 at 12:39:27PM +0100, Henri Hennebert wrote:
> Pawel Jakub Dawidek wrote:
> >On Fri, Nov 09, 2007 at 05:37:00PM +0100, Henri Hennebert wrote:
> >>hello
> >>
> >>To push zfs, I launch 2 scrubs at the same time; after ~20 seconds the
> >>system freezes:
> >[...]
> >
> >I found a de
Pawel Jakub Dawidek wrote:
On Fri, Nov 09, 2007 at 05:37:00PM +0100, Henri Hennebert wrote:
hello
To push zfs, I launch 2 scrubs at the same time; after ~20 seconds the
system freezes:
[...]
I found a deadlock too. If it's reproducible for you, can you try this
patch:
I reproduce it after 30
On Fri, Nov 09, 2007 at 05:37:00PM +0100, Henri Hennebert wrote:
> hello
>
> To push zfs, I launch 2 scrubs at the same time; after ~20 seconds the
> system freezes:
[...]
I found a deadlock too. If it's reproducible for you, can you try this
patch:
http://people.freebsd.org/~pjd/patches/
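Applying such a patch would look roughly like this (the patch file name is truncated above, so a placeholder is used):

  cd /usr/src
  fetch http://people.freebsd.org/~pjd/patches/<patch-name>
  patch < <patch-name>
  # then rebuild and install the kernel and reboot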
On Fri, Nov 09, 2007 at 11:28:27PM +0100, Henri Hennebert wrote:
Henri,
> >See: echo "sleep 1" && time sleep 2 && echo "sleep 2" && time sleep 2
> >and: ls -l /notfound && echo yes
>
> Per the man page, zpool scrub *begins* a scrub which goes on in the background,
> so two scrubs are running simultaneously
On Fri, Nov 09, 2007 at 11:28:27PM +0100, Henri Hennebert wrote:
> Richard Arends wrote:
> >On Fri, Nov 09, 2007 at 09:35:59PM +0100, Henri Hennebert wrote:
> >
> >>>This won't start the scrubs at the same time, but after each other. And
> >>>the second will only start if the first one not fails (e
Richard Arends wrote:
On Fri, Nov 09, 2007 at 09:35:59PM +0100, Henri Hennebert wrote:
This won't start the scrubs at the same time, but one after the other. And
the second will only start if the first one does not fail (exit code == 0)
Not at all, the scrub is asynchronous, I'm sure of it
Running
On Fri, Nov 09, 2007 at 09:35:59PM +0100, Henri Hennebert wrote:
> >This won't start the scrubs at the same time, but one after the other. And
> >the second will only start if the first one does not fail (e
> >
> Not at all, the scrub is asynchronous, I'm sure of it
Running 2 commands sepe
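An easy way to see that, since zpool scrub returns as soon as the scrub has been started:

  zpool scrub pool0 && zpool scrub pool2   # both commands return almost immediately
  zpool status | grep scrub                # both pools then report a scrub in progress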
Richard Arends wrote:
On Fri, Nov 09, 2007 at 05:37:00PM +0100, Henri Hennebert wrote:
Henri,
To push zfs, I launch 2 scrubs at the same time; after ~20 seconds the
system freezes:
zpool scrub pool0 && zpool scrub pool2
This won't start the scrubs at the same time, but one after the other. And
On Fri, Nov 09, 2007 at 05:37:00PM +0100, Henri Hennebert wrote:
Henri,
> To push zfs, I launch 2 scrubs at the same time; after ~20 seconds the
> system freezes:
>
> zpool scrub pool0 && zpool scrub pool2
This won't start the scrubs at the same time, but one after the other. And
the second will on
hello
To push zfs, I launch 2 scrubs at the same time; after ~20 seconds the
system freezes:
zpool scrub pool0 && zpool scrub pool2
My pools:
zpool status
pool: pool0
state: ONLINE
scrub: none requested
config:
NAME        STATE     READ WRITE CKSUM
pool0       ONLINE