Problem with libjpeg when updating packages

2016-07-21 Thread Jörg Preiß
Hi all,

I just updated from 10.3 to stable. This worked fine, except when pkg
install updated the packages. There was a conflict within the libjpeg
package (sorry, I did not write down the exact message).

Now I don't know how to get slim working again, because libjpeg.so.8 is
not found.

So, maybe there is still an error in some update scripts or something
like that?






Re: Panic on BETA1 in the ZFS subsystem

2016-07-21 Thread Andriy Gapon
On 21/07/2016 00:54, Karl Denninger wrote:
>   io_type = ZIO_TYPE_FREE,
>   io_child_type = ZIO_CHILD_VDEV,
>   io_cmd = 0,
>   io_priority = ZIO_PRIORITY_TRIM,

>   io_flags = 789633,
>   io_stage = ZIO_STAGE_VDEV_IO_DONE,
>   io_pipeline = 3080192,
>   io_orig_flags = 525441,
>   io_orig_stage = ZIO_STAGE_OPEN,
>   io_orig_pipeline = 3080192,
>   io_error = 45,

>   vdev_notrim = 1,

>   vdev_queue = {
> vq_vdev = 0xf804d8683000,
> vq_class = 0xf804d86833e8,
> vq_active_tree = {
>   avl_root = 0xf80290a71240,
>   avl_compar = 0x8220b8a0 ,
>   avl_offset = 576,
>   avl_numnodes = 64,
>   avl_size = 952
> },
> vq_read_offset_tree = {
>   avl_root = 0x0,
>   avl_compar = 0x8220b8a0 ,
>   avl_offset = 600,
>   avl_numnodes = 0,
>   avl_size = 952
> },
> vq_write_offset_tree = {
>   avl_root = 0x0,
>   avl_compar = 0x8220b8a0 ,
>   avl_offset = 600,
>   avl_numnodes = 0,
>   avl_size = 952
> },
>   },

Karl,

thank you for the data.
Was this a freshly imported pool?  Or a pool that was not written to
since the import until shortly before the crash?

-- 
Andriy Gapon


Re: Problem with libjpeg when updating packages

2016-07-21 Thread Ronald Klop
On Thu, 21 Jul 2016 09:28:21 +0200, Jörg Preiß wrote:

> Hi all,
>
> I just updated from 10.3 to stable. This worked fine, except when pkg
> install updated the packages. There was a conflict within the libjpeg
> package (sorry, I did not write down the exact message).
>
> Now I don't know how to get slim working again, because libjpeg.so.8 is
> not found.
>
> So, maybe there is still an error in some update scripts or something
> like that?

Reinstall slim. That should install its dependencies and/or give the error  
again.

pkg install -f slim
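
A rough diagnostic sketch (assuming the default LOCALBASE of /usr/local)
to see which package, if any, currently owns the missing library, and
whether any installed packages have missing dependencies:

  # show which package installed libjpeg.so.8, if it is present at all
  pkg which /usr/local/lib/libjpeg.so.8

  # check all installed packages for missing dependencies
  pkg check -d -a

  # then force-reinstall slim together with whatever it pulls in
  pkg install -f slim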

Regards,
Ronald.

Re: Problem with libjpeg when updating packages

2016-07-21 Thread J. Preiss

No, it didn't help. I had already installed libjpeg-turbo manually, but the
error was still the same. So I uninstalled libjpeg-turbo, which removed
almost everything from the system along with it.

Then I installed jpeg, then slim and all the xfce packages, which in turn
removed one package again - I assume jpeg - and reinstalled xfce, which
pulled libjpeg-turbo back in.

It is very odd behavior, but now my X is up and running again.
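
Written out as commands, the sequence above amounts to roughly the
following sketch (the package names jpeg, libjpeg-turbo, slim and xfce
are assumptions about what was actually involved):

  # removing the turbo variant also removed most packages depending on it
  pkg delete libjpeg-turbo

  # reinstalling the desktop swapped the conflicting jpeg packages around
  # and pulled libjpeg-turbo back in as a dependency
  pkg install jpeg slim xfce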



On 21.07.2016 10:29, Ronald Klop wrote:
> On Thu, 21 Jul 2016 09:28:21 +0200, Jörg Preiß wrote:
> 
>> Hi all,
>> 
>> I just updated from 10.3 to stable. This worked fine, except when
>> pkg install updated the packages. There was a conflict within the
>> libjpeg package (sorry, I did not write down the exact message).
>> 
>> Now I don't know how to get slim working again, because
>> libjpeg.so.8 is not found.
>> 
>> So, maybe there is still an error in some update scripts or
>> something like that?
>> 
>> 
> 
> Reinstall slim. That should install its dependencies and/or give
> the error again.
>
> pkg install -f slim
> 
> Regards,
> Ronald.

Re: Panic on BETA1 in the ZFS subsystem

2016-07-21 Thread Karl Denninger
On 7/21/2016 03:07, Andriy Gapon wrote:
> On 21/07/2016 00:54, Karl Denninger wrote:
>>   io_type = ZIO_TYPE_FREE,
>>   io_child_type = ZIO_CHILD_VDEV,
>>   io_cmd = 0,
>>   io_priority = ZIO_PRIORITY_TRIM,
>>   io_flags = 789633,
>>   io_stage = ZIO_STAGE_VDEV_IO_DONE,
>>   io_pipeline = 3080192,
>>   io_orig_flags = 525441,
>>   io_orig_stage = ZIO_STAGE_OPEN,
>>   io_orig_pipeline = 3080192,
>>   io_error = 45,
>>   vdev_notrim = 1,
>>   vdev_queue = {
>> vq_vdev = 0xf804d8683000,
>> vq_class = 0xf804d86833e8,
>> vq_active_tree = {
>>   avl_root = 0xf80290a71240,
>>   avl_compar = 0x8220b8a0 ,
>>   avl_offset = 576,
>>   avl_numnodes = 64,
>>   avl_size = 952
>> },
>> vq_read_offset_tree = {
>>   avl_root = 0x0,
>>   avl_compar = 0x8220b8a0 ,
>>   avl_offset = 600,
>>   avl_numnodes = 0,
>>   avl_size = 952
>> },
>> vq_write_offset_tree = {
>>   avl_root = 0x0,
>>   avl_compar = 0x8220b8a0 ,
>>   avl_offset = 600,
>>   avl_numnodes = 0,
>>   avl_size = 952
>> },
>>   },
> Karl,
>
> thank you for the data.
> Was this a freshly imported pool?  Or a pool that was not written to
> since the import until shortly before the crash?
>
The crash occurred during a backup script operating, which is (roughly)
the following:

zpool import -N backup (mount the pool to copy to)

iterate over a list of zfs filesystems and...

zfs rename fs@zfs-base fs@zfs-old
zfs snapshot fs@zfs-base
zfs send -RI fs@zfs-old fs@zfs-base | zfs receive -Fudv backup
zfs destroy -vr fs@zfs-old
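
As a sketch, the whole pass amounts to something like the following
(the filesystem list and pool names are placeholders; note that -N
imports the pool without mounting its datasets):

  #!/bin/sh
  # import the destination pool without mounting it
  zpool import -N backup

  # placeholder list of source filesystems
  for fs in zroot/ROOT/default zroot/usr zroot/var; do
      # roll the previous base snapshot aside and take a new one
      zfs rename ${fs}@zfs-base ${fs}@zfs-old
      zfs snapshot ${fs}@zfs-base
      # send everything between the two snapshots to the backup pool
      zfs send -RI ${fs}@zfs-old ${fs}@zfs-base | zfs receive -Fudv backup
      # drop the old base snapshot once the copy has succeeded
      zfs destroy -vr ${fs}@zfs-old
  done

  # presumably the pool is exported again afterwards
  zpool export backup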

The first filesystem to be done is the rootfs, that is when it panic'd,
and from the traceback it appears that the Zio's in there are from the
backup volume, so the answer to your question is "yes".

This is a different panic than the one I used to get on 10.2 (the other
one was always in dounmount), and the former symptom was also not
immediately reproducible; whatever was blowing it up before was in-core,
and a reboot would clear it.  This one is not; I (foolishly) believed
that the operation would succeed after the reboot and re-attempted it,
only to get an immediate repeat of the same panic (with an essentially
identical traceback).

What allowed the operation to succeed was removing *all* of the
snapshots (other than the base filesystem, of course) from both the
source *and* backup destination zpools, then re-running the operation. 
That causes a "base" copy to be taken (zfs snapshot fs@zfs-base and then
just a straight send of that instead of an incremental), which was
successful.
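
In other words, after clearing the old snapshots, the recovery pass was
roughly the following (a sketch, not the exact commands used):

  # take a fresh base snapshot and send it in full rather than incrementally
  zfs snapshot fs@zfs-base
  zfs send -R fs@zfs-base | zfs receive -Fudv backup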

The only thing that was odd about the zfs filesystem in question was
that, as the boot environment that was my roll-forward to 11.0, its
"origin" was a clone of 10.2 taken before the install was done, so that
snapshot was present in the zfs snapshot list.  However, it had been
present for several days without incident, so I doubt its presence was
involved in creating the circumstances that led to the panic.

-- 
Karl Denninger
k...@denninger.net 
/The Market Ticker/
/[S/MIME encrypted email preferred]/




Re: Panic on BETA1 in the ZFS subsystem

2016-07-21 Thread Andriy Gapon
On 21/07/2016 15:25, Karl Denninger wrote:
> The crash occurred during a backup script operating, which is (roughly)
> the following:
> 
> zpool import -N backup (mount the pool to copy to)
> 
> iterate over a list of zfs filesystems and...
> 
> zfs rename fs@zfs-base fs@zfs-old
> zfs snapshot fs@zfs-base
> zfs send -RI fs@zfs-old fs@zfs-base | zfs receive -Fudv backup
> zfs destroy -vr fs@zfs-old
> 
> The first filesystem to be done is the rootfs, that is when it panic'd,
> and from the traceback it appears that the Zio's in there are from the
> backup volume, so the answer to your question is "yes".

I think that what happened here was that quite a large number of TRIM
requests were queued by ZFS before it had a chance to learn that the
target vdev in the backup pool did not support TRIM.  So, when the
first request failed with ENOTSUP, the vdev was marked as not supporting
TRIM.  After that, all subsequent requests were failed without sending
them down the storage stack.  But the way it is done means that all the
requests were processed by nested zio_execute() calls on the same
stack, and that led to the stack overflow.

Steve, do you think that this is a correct description of what happened?

The state of the pools that you described below probably contributed to
the avalanche of TRIMs that caused the problem.
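
If it helps to confirm that picture, the stock FreeBSD TRIM code exports
counters that should show the burst of failed/unsupported TRIM requests;
the sysctl names below are an assumption based on the usual TRIM kstats:

  # overall TRIM activity, including requests that failed as unsupported
  sysctl kstat.zfs.misc.zio_trim

  # whether ZFS TRIM is enabled at all (boot-time tunable)
  sysctl vfs.zfs.trim.enabled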

> This is a different panic than the one I used to get on 10.2 (the other
> one was always in dounmount), and the former symptom was also not
> immediately reproducible; whatever was blowing it up before was in-core,
> and a reboot would clear it.  This one is not; I (foolishly) believed
> that the operation would succeed after the reboot and re-attempted it,
> only to get an immediate repeat of the same panic (with an essentially
> identical traceback).
> 
> What allowed the operation to succeed was removing *all* of the
> snapshots (other than the base filesystem, of course) from both the
> source *and* backup destination zpools, then re-running the operation. 
> That causes a "base" copy to be taken (zfs snapshot fs@zfs-base and then
> just a straight send of that instead of an incremental), which was
> successful.
> 
> The only thing that was odd about the zfs filesystem in question was
> that as a boot environment that was my roll-forward to 11.0 its "origin"
> was a clone of 10.2 before the install was done, so that snapshot was
> present in the zfs snapshot list.  However, it had been present for
> several days without incident, so I doubt its presence was involved in
> the creation of the circumstances leading to the panic.


-- 
Andriy Gapon


Re: Panic on BETA1 in the ZFS subsystem

2016-07-21 Thread Karl Denninger
On 7/21/2016 07:52, Andriy Gapon wrote:
> On 21/07/2016 15:25, Karl Denninger wrote:
>> The crash occurred during a backup script operating, which is (roughly)
>> the following:
>>
>> zpool import -N backup (mount the pool to copy to)
>>
>> iterate over a list of zfs filesystems and...
>>
>> zfs rename fs@zfs-base fs@zfs-old
>> zfs snapshot fs@zfs-base
>> zfs send -RI fs@zfs-old fs@zfs-base | zfs receive -Fudv backup
>> zfs destroy -vr fs@zfs-old
>>
>> The first filesystem to be done is the rootfs, that is when it panic'd,
>> and from the traceback it appears that the Zio's in there are from the
>> backup volume, so the answer to your question is "yes".
> I think that what happened here was that quite a large number of TRIM
> requests were queued by ZFS before it had a chance to learn that the
> target vdev in the backup pool did not support TRIM.  So, when the
> first request failed with ENOTSUP, the vdev was marked as not supporting
> TRIM.  After that, all subsequent requests were failed without sending
> them down the storage stack.  But the way it is done means that all the
> requests were processed by nested zio_execute() calls on the same
> stack, and that led to the stack overflow.
>
> Steve, do you think that this is a correct description of what happened?
>
> The state of the pools that you described below probably contributed to
> the avalanche of TRIMs that caused the problem.
>

The source for the backup is a pool composed entirely of SSDs (and
thus supports TRIM), and the target is a pair of spinning-rust devices
(which of course do not support TRIM); the incremental receive to that
pool does (of course) remove all the obsolete snapshots.

What I don't understand, however, is why it has been running fine for a
week or so, and why it immediately repeated the panic on a retry attempt
-- or how to prevent it, at least at this point.  I certainly do not
want to leave the pool mounted when not in active backup use.
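
One blunt way to avoid generating the TRIM avalanche at all, assuming
the stock vfs.zfs.trim.enabled tunable is present on this system, would
be to disable ZFS TRIM until the underlying bug is fixed (a sketch of a
possible workaround, not a recommendation):

  # /boot/loader.conf -- disable ZFS TRIM globally at the next boot
  vfs.zfs.trim.enabled=0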

-- 
Karl Denninger
k...@denninger.net 
/The Market Ticker/
/[S/MIME encrypted email preferred]/




Jenkins build is still unstable: FreeBSD_stable_10 #334

2016-07-21 Thread jenkins-admin
See 



Jenkins build is still unstable: FreeBSD_stable_10 #335

2016-07-21 Thread jenkins-admin
See 
