[Touch-packages] [Bug 1867949] Re: It's time to increase the default pid_max from 32768 to avoid PID wraparounds/collisions

2022-09-07 Thread Trent Lloyd
This happens now on Jammy (22.04) on 64-bit (not on 32-bit due to system
limits)

systemd ships a default /usr/lib/sysctl.d/50-pid-max.conf, as per upstream 
commit here:
https://github.com/systemd/systemd/pull/12226
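
For reference, the upstream change raises the limit to the kernel's maximum on 64-bit. The shipped file is a one-line sysctl fragment (value as per the linked PR; worth double-checking against your installed copy):

```
# /usr/lib/sysctl.d/50-pid-max.conf (as shipped by systemd upstream)
kernel.pid_max = 4194304
```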


** Changed in: procps (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to procps in Ubuntu.
https://bugs.launchpad.net/bugs/1867949

Title:
  It's time to increase the default pid_max from 32768 to avoid PID
  wraparounds/collisions

Status in procps package in Ubuntu:
  Fix Released

Bug description:
  The kernel.pid_max sysctl defaults to 32768. This is a historical
  limit, kept to provide compatibility with ancient binaries.

  In 2020, multicore CPUs are the standard for desktops, laptops and
  servers, and together with PID randomization, wraparound happens
  rather quickly on many-core machines with lots of activity.
  Wraparound in itself is not a big issue, but there are corner cases
  that run into trouble: scripts that check whether a PID is alive will
  misfire if another process has started using the PID they expect,
  scripts (erroneously) use PIDs in names of work/temporary files, etc.

  To avoid problems within the lifetime of Ubuntu Focal, it's time to
  increase kernel.pid_max by default in the distribution by including
  tuning in a file in /etc/sysctl.d/

  Our suggestion is to ship the following tuning by default:

  # Make PID rollover happen less often.
  # Default is 32768
  kernel.pid_max = 999999

  with the following motivation:

  1) It achieves a 30-fold increase in the available number space,
  reducing the likelihood of PID wraparound/collisions.

  2) It only adds one digit to the PID, so it's still possible to
  remember a PID.

  3) Output in top, ps, etc. is still nicely readable.

  4) We have used it for years on Ubuntu 14.04 and onwards, on 1000+
  machines and with a wide array of commercial and scientific software,
  without any issues.

  5) One could argue that it is a preventive security measure: there is
  a lot of weirdly written software out there, including scripts, that
  behaves badly upon PID reuse/collisions.
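
The suggested value implied by the 30-fold and one-extra-digit points is 999999; a quick sanity check of that arithmetic, plus the usual way to apply such a tuning (the file name 60-pid-max.conf is just an example, and applying it requires root):

```shell
# Sanity-check the motivation: 999999 is a ~30-fold increase over the
# default 32768, and only one digit longer.
old=32768; new=999999
echo "$(( new / old ))-fold increase"   # integer division gives 30
echo "${#old} -> ${#new} digits"        # 5 -> 6

# To apply (requires root; example file name):
#   echo 'kernel.pid_max = 999999' > /etc/sysctl.d/60-pid-max.conf
#   sysctl --system    # reload all sysctl.d fragments
```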

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/procps/+bug/1867949/+subscriptions


-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1077796] Re: /bin/kill no longer works with negative PID

2021-12-16 Thread Trent Lloyd
Most shells (including bash and zsh) have a built-in for kill, so it's
handled internally. Some shells don't, so they execute /bin/kill
instead, which has this issue.

One comment noted this was fixed at some point in 2013, in version
3.3.4, but it apparently broke again later and is broken at least in
20.04 Focal's v3.3.16.

This was recently fixed again upstream here:
https://gitlab.com/procps-ng/procps/-/merge_requests/77

Upstream v3.3.16 (in 20.04 Focal and 21.04 Hirsute) was released in Dec
2019 without this fix. The fix was submitted upstream 3 years ago but
only merged 11 months ago, and was included in the v3.3.17 release made
in Feb 2021, so it is not in 20.04 Focal. 3.3.17, with the fix, is
already in 21.10 Impish.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to procps in Ubuntu.
https://bugs.launchpad.net/bugs/1077796

Title:
  /bin/kill no longer works with negative PID

Status in procps package in Ubuntu:
  Triaged

Bug description:
  Incorrectly gives a usage message, e.g.:

  /bin/kill -1 -4321
  ==>
  /bin/kill: invalid option -- '4'

  A workaround appears to be to use "--" before the PID, but this is
  unsatisfactory, as existing scripts would have to be modified when
  they shouldn't have to be.

  This problem only appeared recently, in 12.10.
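
The "--" workaround can be sketched as follows (self-contained: a positive PID is used so it runs anywhere; the negative process-group form is shown in a comment):

```shell
# '--' ends option parsing, so what follows is never mistaken for options.
# For a process group the call would be:  /bin/kill -TERM -- -<pgid>
sleep 300 &
pid=$!
/bin/kill -TERM -- "$pid"
wait "$pid" 2>/dev/null || true
/bin/kill -0 "$pid" 2>/dev/null || echo "process terminated"
```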

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/procps/+bug/1077796/+subscriptions




[Touch-packages] [Bug 1958148] Re: mkinitramfs is too slow

2022-02-28 Thread Trent Lloyd
Where is the discussion happening?

I ran the same benchmarks on my i7-6770HQ 4-core system. This really
needs revising.

While disk space usage in /boot is a concern, in this example at least,
-10 would use only 8MB (10%) more space and cut the time taken from
2m1s to 13s.

zstd.0 84M 0m2.150s
zstd.1 96M 0m1.236s
zstd.2 90M 0m1.350s
zstd.3 84M 0m2.235s
zstd.4 84M 0m3.355s
zstd.5 81M 0m5.679s
zstd.6 81M 0m7.416s
zstd.7 78M 0m8.857s
zstd.8 77M 0m10.134s
zstd.9 77M 0m11.238s
zstd.10 72M 0m13.232s
zstd.11 72M 0m14.897s
zstd.12 72M 0m19.343s
zstd.13 72M 0m26.327s
zstd.14 72M 0m30.948s
zstd.15 72M 0m40.913s
zstd.16 70M 0m59.517s
zstd.17 66M 1m15.854s
zstd.18 64M 1m36.227s
zstd.19 64M 2m1.417s
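
The benchmark above can be reproduced with a loop along these lines (a sketch; the input file here is synthetic random data rather than a real initrd):

```shell
# Time zstd at a few levels against a sample input (random data is
# incompressible, so only the timings are meaningful here).
dd if=/dev/urandom of=/tmp/initrd.sample bs=1M count=4 2>/dev/null
for level in 1 9 19; do
  start=$(date +%s%N)
  zstd -q -f -T0 -$level /tmp/initrd.sample -o /tmp/initrd.sample.zst
  end=$(date +%s%N)
  size=$(wc -c < /tmp/initrd.sample.zst)
  echo "zstd.$level ${size} bytes $(( (end - start) / 1000000 )) ms"
done
```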

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to initramfs-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1958148

Title:
  mkinitramfs is too slow

Status in initramfs-tools package in Ubuntu:
  In Progress

Bug description:
  On the Nezha board creating an initrd takes more than 1 hour. The
  compression level -19 for zstd is unwisely chosen.

  Here are durations and compression results for an SiFive Unmatched
  board with four cores using zstd:

  -T0 -1 => 13.92s, 136298155 bytes
  -T0 -2 => 15.73s, 131717830 bytes
  -T0 -3 => 26.11s, 127434653 bytes
  -T0 -4 => 29.31s, 126924540 bytes
  -T0 -5 => 36.44s, 125296557 bytes
  -T0 -6 => 39.36s, 124781669 bytes
  -T0 -7 => 46.56s, 116200665 bytes
  -T0 -8 => 51.95s, 113172941 bytes
  -T0 -9 => 55.89s, 112835937 bytes
  -T0 -10 => 61.32s, 108326876 bytes
  -T0 -11 => 64.32s, 108115060 bytes
  -T0 -12 => 76.37s, 108016478 bytes
  -T0 -13 => 148.99s, 109121308 bytes
  -T0 -14 => 156.58s, 108908574 bytes
  -T0 -15 => 228.64s, 109213554 bytes
  -T0 -16 => 380.26s, 107260643 bytes
  -T0 -17 => 453.36s, 103679714 bytes
  -T0 -18 => 714.79s, 100402249 bytes
  -T0 -19 => 1046.58s, 100188713 bytes

  Compression levels between -2 and -10 offer a good compromise between
  CPU time and compression results.

  Ideally there would be a parameter that we could pass to mkinitramfs.
  But as a fast solution, we should simply replace -19 with -9 in
  mkinitramfs.
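
  Ubuntu's initramfs-tools later gained a configuration knob for this;
  on versions that support it, the level can be set in
  /etc/initramfs-tools/initramfs.conf (check your version's
  documentation, as the COMPRESSLEVEL option is from newer releases):

  # /etc/initramfs-tools/initramfs.conf (fragment)
  COMPRESS=zstd
  COMPRESSLEVEL=9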

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/initramfs-tools/+bug/1958148/+subscriptions




[Touch-packages] [Bug 1339518] Re: sudo config file specifies group "admin" that doesn't exist in system

2021-11-17 Thread Trent Lloyd
Just noticed this today; it's still the same on Ubuntu 20.04. The
default sudoers file grants the admin group sudo privileges, but the
group doesn't exist by default.

While it has no out-of-the-box security implications, I think this is a
security concern, as someone could add an 'admin' user and not expect
it to gain sudo access through the default matching group name created
for it.

For example, downstream products like web hosting or control-panel
style tools that create users with a user-provided name: since neither
the user nor the group 'admin' exists by default, they could be fooled
into creating escalatable privileges.
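
One defensive measure on an affected system is to ensure the group exists (empty), so a later user-created 'admin' group can't silently match the sudoers rule. A sketch (requires root):

```shell
# Create an empty system group 'admin' if it doesn't already exist, so
# the %admin sudoers rule can never be claimed by an unrelated group.
getent group admin >/dev/null || groupadd --system admin
getent group admin
```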

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to sudo in Ubuntu.
https://bugs.launchpad.net/bugs/1339518

Title:
  sudo config file specifies group "admin" that doesn't exist in system

Status in sudo package in Ubuntu:
  Confirmed

Bug description:
  
  In the configuration file for sudo ( /etc/sudoers ) you find this section:

  # Members of the admin group may gain root privileges
  %admin ALL=(ALL) ALL

  # Allow members of group sudo to execute any command
  %sudo   ALL=(ALL:ALL) ALL

  The sudo group is in /etc/group, but the admin group is not. This is
  a cosmetic bug, but if we specify a group that is allowed to use the
  sudo command, then the group should exist in the system too.

  Installed version: Ubuntu 14.04 LTS all upgrades up to 9 july 2014
  installed, 64 bit desktop ISO used for installation.

  Sudo package installed:
  ii  sudo  1.8.9p5-1ubuntu1  amd64  Provide limited super user
  privileges to specific users

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sudo/+bug/1339518/+subscriptions




[Touch-packages] [Bug 1339518] Re: sudo config file specifies group "admin" that doesn't exist in system

2021-11-17 Thread Trent Lloyd
Subscribing Marc, as he seems to be largely maintaining this, made the
original changes, and has been keeping the delta. Hopefully he can
provide some insight.

It seems this is a delta to Debian that has been kept intentionally for
a long time; it's frequently in the changelog, even in the most recent
Debian merge.

I'd have thought that if we kept this in here by default, we probably
should have kept a default 'admin' group with no members, but it's a
bit late for that at this point.

- debian/sudoers:
 + also grant admin group sudo access

It also seems this change was originally made in 2014:

sudo (1.8.9p5-1ubuntu3) vivid; urgency=medium

  * debian/patches/also_check_sudo_group.diff: also check the sudo group
in plugins/sudoers/sudoers.c to create the admin flag file. Leave the
admin group check for backwards compatibility. (LP: #1387347)

 -- Marc Deslauriers   Wed, 29 Oct 2014
15:55:34 -0400

sudo (1.8.9p5-1ubuntu2) utopic; urgency=medium

  * debian/sudo_root.8: mention sudo group instead of deprecated group
admin (LP: #1130643)

 -- Andrey Bondarenko   Sat, 23 Aug
2014 01:18:05 +0600

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to sudo in Ubuntu.
https://bugs.launchpad.net/bugs/1339518



[Touch-packages] [Bug 1952496] Re: ubuntu 20.04 LTS network problem

2021-11-28 Thread Trent Lloyd
To assist with this, can you get the following outputs from the broken
system:

# Change 'hostname.local' to the hostname expected to work

cat /etc/nsswitch.conf

systemctl status avahi-daemon

journalctl -u avahi-daemon --boot

avahi-resolve-host-name hostname.local

getent hosts hostname.local

** Changed in: avahi (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1952496

Title:
  ubuntu 20.04 LTS network problem

Status in avahi package in Ubuntu:
  Incomplete

Bug description:
  AFP networking only partially works. Two tries, Elementary OS 6 and
  Ubuntu 20.04 installs, and both have the identical problem. Resolved
  by reinstalling Ubuntu 18.04 LTS. Using different kernels doesn't
  matter. The specific problem is with AFP host name resolution.
  Connecting by IP address using afp://ip_address did work, but using
  the GUI (Files/Other Locations/Networks) with named hosts will not;
  it always returns that it can't resolve hostname.local. Tried
  multiple edits of different network files, and even installed
  netatalk in my attempts. Again, the solution is to downgrade, which
  works right out of the box, as it did on another machine, and as this
  one did before the upgrade. This was only attempted with 2 other
  computers, one Mac, one Ubuntu/Elementary.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1952496/+subscriptions




[Touch-packages] [Bug 1952496] Re: ubuntu 20.04 LTS network problem

2021-11-28 Thread Trent Lloyd
As a side note, it may be time to switch to a newer protocol, as even
Apple has dropped support for sharing over AFP in its last few releases
and is deprecating its usage. You can use Samba to serve SMB, including
the Apple-specific extensions, if you need Time Machine support etc. on
your NAS.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1952496



[Touch-packages] [Bug 1952496] Re: ubuntu 20.04 LTS network problem

2021-11-29 Thread Trent Lloyd
Thanks for the data. I can see you queried 'steven-ubuntu.local', and
that looks like the hostname of the local machine. Can you also query
the hostname of the AFP server you are trying to connect to (using both
getent hosts and avahi-resolve-host-name)?

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1952496



[Touch-packages] [Bug 2021409] Re: mdns failing with current versions of libnss-mdns and avahi-daemon

2023-05-28 Thread Trent Lloyd
If the primary issue is that other devices can only resolve your
hostname after restarting avahi-daemon for a short time, plus, this
machine doesn't see anything else on the network, it means that for one
reason or another mDNS packets on port 5353 are not making it back to
this host.

The overwhelming majority of such cases are due to a bug in the driver
for that network interface, almost always its a wireless interface or
some kind - it's rare in wired ethernet drivers but not impossible.

Very often, setting the interface to promiscuous mode will fix this,
because it tells the NIC to receive all packets (by default, NICs
filter out packets not addressed to the host). You can try this command
to set it:
ip link set ens160 promisc on

If this solves the issue, then it's definitely a buggy driver, and I
cannot do anything about that at the avahi level. You'll need to look
at getting the driver fixed. This is common with bad wifi drivers and
very rare with ethernet.

If that doesn't help, it could be that the driver also doesn't support
promiscuous mode, but I've not really seen that before. I would then
check:
- The firewall on this machine
- Any features of your wireless network such as "multicast DNS
optimisation", etc.

If you run tcpdump for port 5353 you'll see packets coming and going. I
suspect your machine most likely never receives any query packets from
another host. You can try running this command and then issuing a query
from another host:
sudo tcpdump -ni ens160 --no-promiscuous-mode port 5353

You can also try this without "--no-promiscuous-mode" (by default,
tcpdump puts the interface into promiscuous mode, with the same effect
as the ip link command above).


As a final note, the resolvectl/resolved mDNS support is totally
independent of avahi, and having it enabled could cause mDNS to fail in
some cases, particularly on a network that performs 'multicast DNS
optimisation', where packets are converted from multicast to unicast.
Only a single process can bind to port 5353 for unicast packets, and
those packets will randomly get sent to resolved, avahi, chrome, or any
other process listening on port 5353. So disabling it (and anything
else listening on 5353) could help in some specific circumstances,
though it is usually not necessary.
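
To see which processes are competing for the mDNS port on a given machine, you can list the sockets bound to 5353 (ss is from iproute2; -p needs root to show process names):

```shell
# List UDP listeners on the mDNS port; more than one (avahi, resolved,
# chrome, ...) can lead to the unicast-delivery race described above.
ss -lunp 'sport = :5353'
```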

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/2021409

Title:
  mdns failing with current versions of libnss-mdns and avahi-daemon

Status in Avahi:
  New
Status in avahi package in Ubuntu:
  New
Status in nss-mdns package in Ubuntu:
  New

Bug description:
  I have Ubuntu Server 22.04.2

  I have installed avahi-daemon and libnss-mdns, but mdns resolution is
  not occurring over my primary network interface:

  Note that this is with `allow-interfaces=ens160` set in
  /etc/avahi/avahi-daemon.conf.

  (22:25:17 Sat May 27 2023 jeremy@cid pts/0 aarch64)
  [648] ~ $ sudo resolvectl mdns ens160 1

  (22:25:22 Sat May 27 2023 jeremy@cid pts/0 aarch64)
  [649] ~ $ resolvectl status
  Global
 Protocols: -LLMNR +mDNS -DNSOverTLS DNSSEC=no/unsupported
  resolv.conf mode: stub

  Link 2 (ens160)
  Current Scopes: DNS mDNS/IPv4 mDNS/IPv6
   Protocols: +DefaultRoute +LLMNR +mDNS -DNSOverTLS 
DNSSEC=no/unsupported
  Current DNS Server: 2601:647:6680:4b95::1
 DNS Servers: 10.1.30.1 2601:647:6680:4b95::1
  DNS Domain: localdomain

  (22:25:27 Sat May 27 2023 jeremy@cid pts/0 aarch64)
  [650] ~ $ avahi-resolve -n cid.local
  Failed to resolve host name 'cid.local': Timeout reached

  (22:26:03 Sat May 27 2023 jeremy@cid pts/0 aarch64)
  [651] ~ $ avahi-resolve -a 10.0.30.1
  Failed to resolve address '10.0.30.1': Timeout reached

  (22:26:13 Sat May 27 2023 jeremy@cid pts/0 aarch64)
  [652] ~ $ sudo systemctl status avahi-daemon
  ● avahi-daemon.service - Avahi mDNS/DNS-SD Stack
   Loaded: loaded (/lib/systemd/system/avahi-daemon.service; enabled; 
vendor preset: enabled)
   Active: active (running) since Sat 2023-05-27 22:22:29 UTC; 3min 51s ago
  TriggeredBy: ● avahi-daemon.socket
 Main PID: 850 (avahi-daemon)
   Status: "avahi-daemon 0.8 starting up."
Tasks: 2 (limit: 4523)
   Memory: 1.5M
  CPU: 15ms
   CGroup: /system.slice/avahi-daemon.service
   ├─850 "avahi-daemon: running [cid.local]"
   └─891 "avahi-daemon: chroot helper"

  May 27 22:22:29 cid avahi-daemon[850]: No service file found in 
/etc/avahi/services.
  May 27 22:22:29 cid avahi-daemon[850]: Joining mDNS multicast group on 
interface ens160.IPv6 with address 2601:647:6680:4b95:20c:29ff:fe42:f399.
  May 27 22:22:29 cid avahi-daemon[850]: New relevant interface ens160.IPv6 for 
mDNS.
  May 27 22:22:29 cid avahi-daemon[850]: Joining mDNS multicast group on 
interface ens160.IPv4 with address 10.1.30.2.
  May 27 22:22:29 cid avahi-daemon[850]: New relevant interface ens160.IPv4 for 
mDNS.
  May 27 22:22:29 cid avahi-daemon[850]: Network interface enumeration 
completed.

[Touch-packages] [Bug 2021409] Re: mdns failing with current versions of libnss-mdns and avahi-daemon

2023-05-28 Thread Trent Lloyd
There's a detail in the GitHub issue that wasn't noted here:
"I have Ubuntu Server 22.04.2 running in a VM (VMWare 13.0.2) on an
Apple Silicon Mac running macOS 13.4."

That makes the network driver situation less likely, but the firewall
situation is still possible.

A few extra questions:
What kind of network is Fusion set to: bridged, NAT, host-only, etc.?
+ If you are using bridged, but to your Mac's wireless adapter, I have
also experienced this not working as expected. Try ethernet via a
USB-ethernet adapter, or native ethernet if your machine has it, and
see if it works then. That was the case for me. If you google "vmware
fusion bridge wifi" there are lots of posts with slightly different
setups and symptoms.
+ If you are using NAT/host-only, I would expect you to only be able to
resolve hostnames with the host Mac and not the rest of the network.

Run tcpdump as directed; I am curious whether you see any packets sent
from any other node on your network. I am guessing not. If you do, it
would be ideal to capture a pcap file (a copy of all the network
packets on port 5353) from both this Linux VM and another Linux machine
on the network that is not the host Mac:

tcpdump -i <interface> -s 65535 -w lp2021409-$(hostname).pcap port 5353

Then, while the pcap is running, run a query from both the VM and the
external host for the other machine. Then stop it. Upload both files
and note the IP addresses and hostnames of both machines.


It's highly likely this is not a bug in avahi or nss-mdns, but a
network issue of some kind. Hopefully this will get you going in the
right direction.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/2021409

Title:
  mdns failing with current versions of libnss-mdns and avahi-daemon

Status in Avahi:
  New
Status in avahi package in Ubuntu:
  New
Status in nss-mdns package in Ubuntu:
  New

Bug description:
  I have Ubuntu Server 22.04.2

  I have installed avahi-daemon and libnss-mdns, but mdns resolution is
  not occurring over my primary network interface:

  Note that this is with `allow-interfaces=ens160` set in
  /etc/avahi/avahi-daemon.conf.

  (22:25:17 Sat May 27 2023 jeremy@cid pts/0 aarch64)
  [648] ~ $ sudo resolvectl mdns ens160 1

  (22:25:22 Sat May 27 2023 jeremy@cid pts/0 aarch64)
  [649] ~ $ resolvectl status
  Global
 Protocols: -LLMNR +mDNS -DNSOverTLS DNSSEC=no/unsupported
  resolv.conf mode: stub

  Link 2 (ens160)
  Current Scopes: DNS mDNS/IPv4 mDNS/IPv6
   Protocols: +DefaultRoute +LLMNR +mDNS -DNSOverTLS 
DNSSEC=no/unsupported
  Current DNS Server: 2601:647:6680:4b95::1
 DNS Servers: 10.1.30.1 2601:647:6680:4b95::1
  DNS Domain: localdomain

  (22:25:27 Sat May 27 2023 jeremy@cid pts/0 aarch64)
  [650] ~ $ avahi-resolve -n cid.local
  Failed to resolve host name 'cid.local': Timeout reached

  (22:26:03 Sat May 27 2023 jeremy@cid pts/0 aarch64)
  [651] ~ $ avahi-resolve -a 10.0.30.1
  Failed to resolve address '10.0.30.1': Timeout reached

  (22:26:13 Sat May 27 2023 jeremy@cid pts/0 aarch64)
  [652] ~ $ sudo systemctl status avahi-daemon
  ● avahi-daemon.service - Avahi mDNS/DNS-SD Stack
   Loaded: loaded (/lib/systemd/system/avahi-daemon.service; enabled; 
vendor preset: enabled)
   Active: active (running) since Sat 2023-05-27 22:22:29 UTC; 3min 51s ago
  TriggeredBy: ● avahi-daemon.socket
 Main PID: 850 (avahi-daemon)
   Status: "avahi-daemon 0.8 starting up."
Tasks: 2 (limit: 4523)
   Memory: 1.5M
  CPU: 15ms
   CGroup: /system.slice/avahi-daemon.service
   ├─850 "avahi-daemon: running [cid.local]"
   └─891 "avahi-daemon: chroot helper"

  May 27 22:22:29 cid avahi-daemon[850]: No service file found in 
/etc/avahi/services.
  May 27 22:22:29 cid avahi-daemon[850]: Joining mDNS multicast group on 
interface ens160.IPv6 with address 2601:647:6680:4b95:20c:29ff:fe42:f399.
  May 27 22:22:29 cid avahi-daemon[850]: New relevant interface ens160.IPv6 for 
mDNS.
  May 27 22:22:29 cid avahi-daemon[850]: Joining mDNS multicast group on 
interface ens160.IPv4 with address 10.1.30.2.
  May 27 22:22:29 cid avahi-daemon[850]: New relevant interface ens160.IPv4 for 
mDNS.
  May 27 22:22:29 cid avahi-daemon[850]: Network interface enumeration 
completed.
  May 27 22:22:29 cid avahi-daemon[850]: Registering new address record for 
2601:647:6680:4b95:20c:29ff:fe42:f399 on ens160.*.
  May 27 22:22:29 cid avahi-daemon[850]: Registering new address record for 
10.1.30.2 on ens160.IPv4.
  May 27 22:22:30 cid avahi-daemon[850]: Server startup complete. Host name is 
cid.local. Local service cookie is 2371061072.
  May 27 22:26:11 cid avahi-daemon[850]: wide-area.c: Query timed out.

  (22:27:14 Sat May 27 2023 jeremy@cid pts/0 aarch64)
  [657] ~ $ grep hosts /etc/nsswitch.conf 
  hosts:  files mdns4_minimal [NOTFOUND=return] dns

  ---

  If I set 'allow-interfaces=ens1

[Touch-packages] [Bug 1641236] Re: Confined processes inside container cannot fully access host pty device passed in by lxc exec

2022-10-25 Thread Trent Lloyd
Note: This affects SSH as well, not only lxc exec. There is a
currently-marked-duplicate bug about the SSH part in Bug #1667016.

This still persists on focal now. To work around this, I have to *both*
use tcpdump with -l (line-buffered mode) *and* pipe the output to cat.
You also want to redirect stderr, otherwise it's silently lost:

# tcpdump -lni o-hm0 2>&1 | cat


The apparmor errors I get are:
[ 6414.508990] audit: type=1400 audit(1666764106.013:360): apparmor="DENIED" 
operation="file_inherit" 
namespace="root//lxd-juju-5062b7-2-lxd-3_" 
profile="/usr/sbin/tcpdump" name="/dev/pts/2" pid=187936 comm="tcpdump" 
requested_mask="wr" denied_mask="wr" fsuid=100 ouid=1001000


I have determined the cause: tcpdump is one of the few programs with
its own restrictive AppArmor profile (/etc/apparmor.d/usr.sbin.tcpdump).
As part of that, it locks down /dev to read-only:
> /dev/ r,

However, that also means /dev/pts is read-only, hence the error above
denies write access.

There is an abstraction, #include <abstractions/consoles>, which adds
access to /dev/pts and other console devices. It's also included in the
profile for usr.bin.man.

Including this abstraction resolves the issue for me. I'll upload a
patch.
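
The change amounts to one include line in the tcpdump profile (a sketch of the shape of the fix; the actual uploaded patch may place it differently):

# /etc/apparmor.d/usr.sbin.tcpdump (fragment)
# Grant access to /dev/pts/* and other console devices:
#include <abstractions/consoles>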

** Also affects: tcpdump (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: apparmor (Ubuntu)
   Status: Confirmed => Invalid

** Changed in: tcpdump (Ubuntu)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to apparmor in Ubuntu.
https://bugs.launchpad.net/bugs/1641236

Title:
  Confined processes inside container cannot fully access host pty
  device passed in by lxc exec

Status in apparmor package in Ubuntu:
  Invalid
Status in lxd package in Ubuntu:
  Invalid
Status in tcpdump package in Ubuntu:
  Confirmed

Bug description:
  Now that AppArmor policy namespaces and profile stacking are in
  place, I noticed odd stdout buffering behavior when running confined
  processes via lxc exec. Much more stdout data is buffered before
  getting flushed when the program is confined by an AppArmor profile
  inside of the container.

  I see that lxd is calling openpty(3) in the host environment, using
  the returned fd as stdout, and then executing the command inside of
  the container. This results in an AppArmor denial because the file
  descriptor returned by openpty(3) originates outside of the namespace
  used by the container.

  The denial is likely from glibc calling fstat(), from inside the
  container, on the file descriptor associated with stdout to make a
  decision on how much buffering to use. The fstat() is denied by
  AppArmor, and glibc ends up handling the buffering differently than
  it would if the fstat() had been successful.

  Steps to reproduce (using an up-to-date 16.04 amd64 VM):

  Create a 16.04 container
  $ lxc launch ubuntu-daily:16.04 x

  Run tcpdump in one terminal and generate traffic in another terminal (wget 
google.com)
  $ lxc exec x -- tcpdump -i eth0
  tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
  listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
  
  47 packets captured
  48 packets received by filter
  1 packet dropped by kernel
  

  Note that everything above the capture summary was printed
  immediately, because it was printed to stderr. The captured packet
  output, which is printed to stdout, was not printed until you pressed
  ctrl-c and the buffers were flushed thanks to the program
  terminating. Also, this AppArmor denial shows up in the logs:

  audit: type=1400 audit(1478902710.025:440): apparmor="DENIED"
  operation="getattr" info="Failed name lookup - disconnected path"
  error=-13 namespace="root//lxd-x_"
  profile="/usr/sbin/tcpdump" name="dev/pts/12" pid=15530 comm="tcpdump"
  requested_mask="r" denied_mask="r" fsuid=165536 ouid=165536

  Now run tcpdump unconfined and take note that the captured packet
  output is printed immediately, before you terminate tcpdump. Also,
  there are no AppArmor denials.
  $ lxc exec x -- aa-exec -p unconfined -- tcpdump -i eth0
  ...

  Now run tcpdump confined, but in lxc exec's non-interactive mode, and
  note that the captured packet output is printed immediately and no
  AppArmor denials are present. (Looking at the lxd code in
  lxd/container_exec.go, openpty(3) is only called in interactive mode.)
  $ lxc exec x --mode=non-interactive -- tcpdump -i eth0
  ...

  Applications that manually call fflush(stdout) are not affected by
  this as manually flushing stdout works fine. The problem seems to be
  caused by glibc not being able to fstat() the /dev/pts/12 fd from the
  host's namespace.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/1641236/+subscriptions


-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1641236] Re: Confined processes inside container cannot fully access host pty device passed in by lxc exec

2022-10-25 Thread Trent Lloyd
** Changed in: tcpdump (Ubuntu)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to apparmor in Ubuntu.
https://bugs.launchpad.net/bugs/1641236

Title:
  Confined processes inside container cannot fully access host pty
  device passed in by lxc exec

Status in apparmor package in Ubuntu:
  Invalid
Status in lxd package in Ubuntu:
  Invalid
Status in tcpdump package in Ubuntu:
  Confirmed



[Touch-packages] [Bug 1641236] Re: Confined processes inside container cannot fully access host pty device passed in by lxc exec

2022-10-25 Thread Trent Lloyd
** Changed in: apparmor (Ubuntu)
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to apparmor in Ubuntu.
https://bugs.launchpad.net/bugs/1641236

Title:
  Confined processes inside container cannot fully access host pty
  device passed in by lxc exec

Status in apparmor package in Ubuntu:
  Confirmed
Status in lxd package in Ubuntu:
  Invalid
Status in tcpdump package in Ubuntu:
  Confirmed



[Touch-packages] [Bug 1641236] Re: Confined processes inside container cannot fully access host pty device passed in by lxc exec

2022-10-26 Thread Trent Lloyd
The above analysis holds for SSH, but I realise it's different for the
PTY passed in by lxc exec.

My analysis may still be correct, but I am going to move the SSH fix
over to Bug #1667016 so this bug can stay open for the general
PTY/buffering issue.

There is a gap in my explanation: it's not clear to me why this doesn't
also happen outside of a container.

Of note, the error I get initially suggests it couldn't resolve the path
of the FD, which is probably under /dev/pts:
[ 9119.221342] audit: type=1400 audit(1666766810.741:452): apparmor="DENIED" 
operation="getattr" info="Failed name lookup - disconnected path" error=-13 
namespace="root//lxd-juju-5062b7-2-lxd-3_" 
profile="/usr/sbin/tcpdump" name="apparmor/.null" pid=257511 comm="tcpdump" 
requested_mask="r" denied_mask="r" fsuid=1000108 ouid=0

However, the same fix makes this go away. Is apparmor (or this error
message) failing to identify the path because it has no permission to
stat it in that apparmor context, or something similar? Also, is
"/dev r" a faulty permission?
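
For what it's worth, "Failed name lookup - disconnected path" denials
are generally associated with a profile lacking the attach_disconnected
flag. A hedged sketch of that kind of change (an assumption on my part -
I have not verified this is the correct fix for this bug):

```
# Hypothetical: attach_disconnected lets the profile mediate paths that
# are disconnected from the process's namespace, such as an fd for a
# pty opened on the host.
/usr/sbin/tcpdump flags=(attach_disconnected) {
  # ... existing rules unchanged ...
}
```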


It's notable that after I reload the apparmor profile, and sometimes
randomly, this issue goes away for the current terminal session - it
seems it can resolve the path for a while. E.g. if I add and then remove
the consoles abstraction, it suddenly works inside that session. But if
I log out and back in, it breaks again.

I'm a bit lost here :)

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to apparmor in Ubuntu.
https://bugs.launchpad.net/bugs/1641236

Title:
  Confined processes inside container cannot fully access host pty
  device passed in by lxc exec

Status in apparmor package in Ubuntu:
  Confirmed
Status in lxd package in Ubuntu:
  Invalid
Status in tcpdump package in Ubuntu:
  Confirmed



[Touch-packages] [Bug 1920640] Re: EXPKEYSIG C8CAB6595FDFF622 Ubuntu Debug Symbol Archive Automatic Signing Key (2016)

2021-03-21 Thread Trent Lloyd
Just to make the current status clear from what I can gather:

- The GPG key was extended by 1 year to 2022-03-21

- On Ubuntu Bionic (18.04) and newer, the GPG key is normally installed
by the ubuntu-dbgsym-keyring package. That package has not yet been
updated; an update to it is required and still pending.

- On Ubuntu Xenial (16.04) users typically imported the key from
keyserver.ubuntu.com. As that is not yet updated either, you will need
to import the key over HTTP using the command below, which works as a
temporary measure on all Ubuntu releases. Once keyserver.ubuntu.com is
updated, you could also use "sudo apt-key adv --keyserver
keyserver.ubuntu.com --recv-keys
F2EDC64DC5AEE1F6B9C621F0C8CAB6595FDFF622"

- The updated GPG key is not currently published to keyserver.ubuntu.com

- The updated GPG key is available at http://ddebs.ubuntu.com/dbgsym-
release-key.asc

- As a workaround you can import that key to apt using "wget -O -
http://ddebs.ubuntu.com/dbgsym-release-key.asc | sudo apt-key add -"
(note: you need a space between the -O and -, contrary to the previously
pasted comment)

- I believe that the key likely needs to be extended longer and
published to all resources including the ubuntu-dbgsym-keyring package
and keyserver.ubuntu.com
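
- As a related aside (an assumption on my part, not from this thread):
on releases where apt-key is deprecated, the keyring file shipped by
ubuntu-dbgsym-keyring can be referenced directly with a signed-by
option instead; the components listed here are illustrative:

```
# /etc/apt/sources.list.d/ddebs.list (illustrative sketch)
deb [signed-by=/usr/share/keyrings/ubuntu-dbgsym-keyring.gpg] http://ddebs.ubuntu.com bionic main restricted universe multiverse
```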

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to ubuntu-keyring in Ubuntu.
https://bugs.launchpad.net/bugs/1920640

Title:
  EXPKEYSIG C8CAB6595FDFF622 Ubuntu Debug Symbol Archive Automatic
  Signing Key (2016) 

Status in ubuntu-keyring package in Ubuntu:
  Confirmed

Bug description:
  The public key used by the debugging symbols repository
  /usr/share/keyrings/ubuntu-dbgsym-keyring.gpg from the package ubuntu-
  dbgsym-keyring expired.

  $ apt policy ubuntu-dbgsym-keyring
  ubuntu-dbgsym-keyring:
    Installed: 2020.02.11.2
    Candidate: 2020.02.11.2
    Version table:
   *** 2020.02.11.2 500
  500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages
  500 http://archive.ubuntu.com/ubuntu focal/main i386 Packages
  100 /var/lib/dpkg/status
  $ gpg --no-default-keyring --keyring 
/usr/share/keyrings/ubuntu-dbgsym-keyring.gpg --list-keys
  /usr/share/keyrings/ubuntu-dbgsym-keyring.gpg
  ---------------------------------------------
  pub   rsa4096 2016-03-21 [SC] [expired: 2021-03-20]
    F2EDC64DC5AEE1F6B9C621F0C8CAB6595FDFF622
  uid   [ expired] Ubuntu Debug Symbol Archive Automatic Signing Key 
(2016) 

  
  Error message on "apt update":

  E: The repository 'http://ddebs.ubuntu.com bionic-updates Release' is not 
signed.
  N: Updating from such a repository can't be done securely, and is therefore 
disabled by default.
  N: See apt-secure(8) manpage for repository creation and user configuration 
details.
  W: GPG error: http://ddebs.ubuntu.com bionic Release: The following 
signatures were invalid: EXPKEYSIG C8CAB6595FDFF622 Ubuntu Debug Symbol Archive 
Automatic Signing Key (2016) 
  E: The repository 'http://ddebs.ubuntu.com bionic Release' is not signed.
  N: Updating from such a repository can't be done securely, and is therefore 
disabled by default.
  N: See apt-secure(8) manpage for repository creation and user configuration 
details.
  W: GPG error: http://ddebs.ubuntu.com bionic-proposed Release: The following 
signatures were invalid: EXPKEYSIG C8CAB6595FDFF622 Ubuntu Debug Symbol Archive 
Automatic Signing Key (2016) 
  E: The repository 'http://ddebs.ubuntu.com bionic-proposed Release' is not 
signed.
  N: Updating from such a repository can't be done securely, and is therefore 
disabled by default.
  N: See apt-secure(8) manpage for repository creation and user configuration 
details.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubuntu-keyring/+bug/1920640/+subscriptions



[Touch-packages] [Bug 1920640] Re: EXPKEYSIG C8CAB6595FDFF622 Ubuntu Debug Symbol Archive Automatic Signing Key (2016)

2021-03-21 Thread Trent Lloyd
Updated the following wiki pages:
https://wiki.ubuntu.com/Debug%20Symbol%20Packages
https://wiki.ubuntu.com/DebuggingProgramCrash

With the note:
Note: The GPG key expired on 2021-03-21 and may need updating by either 
upgrading the ubuntu-dbgsym-keyring package or re-running the apt-key command. 
Please see Bug #1920640 for workaround details if that does not work.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to ubuntu-keyring in Ubuntu.
https://bugs.launchpad.net/bugs/1920640

Title:
  EXPKEYSIG C8CAB6595FDFF622 Ubuntu Debug Symbol Archive Automatic
  Signing Key (2016) 

Status in ubuntu-keyring package in Ubuntu:
  Confirmed



[Touch-packages] [Bug 1920640] Re: EXPKEYSIG C8CAB6595FDFF622 Ubuntu Debug Symbol Archive Automatic Signing Key (2016)

2021-03-21 Thread Trent Lloyd
** Changed in: ubuntu-keyring (Ubuntu)
   Importance: Undecided => Critical

** Changed in: ubuntu-keyring (Ubuntu)
   Importance: Critical => High

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to ubuntu-keyring in Ubuntu.
https://bugs.launchpad.net/bugs/1920640

Title:
  EXPKEYSIG C8CAB6595FDFF622 Ubuntu Debug Symbol Archive Automatic
  Signing Key (2016) 

Status in ubuntu-keyring package in Ubuntu:
  Confirmed



[Touch-packages] [Bug 1806012] Re: set-cpufreq: 'powersave' governor configuration sanity on ubuntu server

2019-04-02 Thread Trent Lloyd
Something I was not previously aware of that informs this a bit more:
in some BIOS modes (apparently HP uses this extensively; unsure about
Dell and others) you get a "Collaborative Power Control" mode, which
sets the scaling_driver to pcc-cpufreq (as opposed to cpufreq) and is a
hybrid of OS- and BIOS-defined behavior.

In these collaborative modes the exact behavior probably differs wildly
depending on what the BIOS is doing, which would likely explain why we
get weird and inconsistent performance. It's unclear to me whether such
BIOS modes still use intel_pstate, or whether this is specific to
pre-pstate hardware - something I'd have to look into.

Some more information about collaborative mode in this bug:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1447763

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1806012

Title:
  set-cpufreq: 'powersave' governor configuration sanity on ubuntu
  server

Status in systemd package in Ubuntu:
  In Progress
Status in systemd source package in Xenial:
  In Progress
Status in systemd source package in Bionic:
  In Progress
Status in systemd source package in Cosmic:
  In Progress
Status in systemd source package in Disco:
  In Progress

Bug description:
  Whilst debugging 'slow instance performance' on an Ubuntu Bionic based
  cloud, I observed that the default cpu governor configuration was set
  to 'powersave'; toggling this to 'performance' (while not in any way a
  particularly green thing to do) resulted in the instance slowness
  disappearing and the cloud performance being as expected (based on a
  prior version of the deploy on Ubuntu Xenial).

  AFAICT Xenial does the same thing, albeit in a slightly different way,
  but we definitely did not see the same performance lagginess under a
  Xenial based cloud.

  Raising against systemd (as this package sets the governor to
  'powersave') - I feel that the switch to 'performance' although
  appropriate then obscures what might be a performance/behavioural
  difference in the underlying kernel when a machine is under load.

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: systemd 237-3ubuntu10.9
  ProcVersionSignature: Ubuntu 4.15.0-39.42-generic 4.15.18
  Uname: Linux 4.15.0-39-generic x86_64
  ApportVersion: 2.20.9-0ubuntu7.5
  Architecture: amd64
  Date: Fri Nov 30 10:05:46 2018
  Lsusb:
   Bus 002 Device 002: ID 8087:8002 Intel Corp. 
   Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
   Bus 001 Device 003: ID 413c:a001 Dell Computer Corp. Hub
   Bus 001 Device 002: ID 8087:800a Intel Corp. 
   Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
  MachineType: Dell Inc. PowerEdge R630
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=C.UTF-8
   SHELL=/bin/bash
  ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-4.15.0-39-generic 
root=UUID=a361a524-47eb-46c3-8a04-e5eaa65188c9 ro hugepages=103117 iommu=pt 
intel_iommu=on
  SourcePackage: systemd
  UpgradeStatus: No upgrade log present (probably fresh install)
  dmi.bios.date: 11/08/2016
  dmi.bios.vendor: Dell Inc.
  dmi.bios.version: 2.3.4
  dmi.board.name: 02C2CP
  dmi.board.vendor: Dell Inc.
  dmi.board.version: A03
  dmi.chassis.type: 23
  dmi.chassis.vendor: Dell Inc.
  dmi.modalias: 
dmi:bvnDellInc.:bvr2.3.4:bd11/08/2016:svnDellInc.:pnPowerEdgeR630:pvr:rvnDellInc.:rn02C2CP:rvrA03:cvnDellInc.:ct23:cvr:
  dmi.product.name: PowerEdge R630
  dmi.sys.vendor: Dell Inc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1806012/+subscriptions



[Touch-packages] [Bug 1806012] Re: set-cpufreq: 'powersave' governor configuration sanity on ubuntu server

2019-04-02 Thread Trent Lloyd
(As context to this information: this particularly bad performance with
'powersave' apparently happens when the BIOS power control is set to its
default, and goes away when BIOS power management is set to 'OS control'
- so some additional information is needed to determine why this
particular case performs so badly when, as shown below, the
powersave/performance governors should not normally differ by more than
a few percent.)

I would not have expected the governor choice (powersave or otherwise)
to limit performance so severely as to prevent a VM from booting or
working usefully. I would expect frequency-governor settings to make a
difference in benchmarks and power usage, not in general interactive
performance. The Phoronix data referred to later supports that view
(the performance difference is generally minimal). The behavior you
experienced is really a bug, in my view.

On modern Intel CPUs (Sandy Bridge and newer - many 2011/2012+ models,
though it varies with the exact CPU) the Intel "pstate" driver is used,
which is significantly different from the older "cpufreq" driver. This
is important to note, as two different drivers are in use depending on
which CPU you have - rather than on the OS (Xenial and Bionic use the
same settings).

Although both drivers have governor modes named "powersave" and
"performance", they are similar in name only: their behavior is quite
different and they share no code. To that end you may find different
behavior between a test/lab environment, which is not unlikely to have
much older hardware, and current new hardware. It would also be good to
know, for this specific badly broken system, which scaling_driver was
in use and the precise processor model from /proc/cpuinfo.

This recently released Phoronix article compares the performance, across
various benchmarks, as well as the power usage of the various driver and
governor combinations (it's a good read in its own right):
https://www.phoronix.com/scan.php?page=article&item=linux50-pstate-cpufreq

It has a few interesting observations. In the majority of benchmarks the
performance of the two is very similar; in fact the pstate powersave
governor is slightly faster (!) than the pstate performance governor in
many of the tests, by a small margin. Another major observation from the
Phoronix data is that the cpufreq-powersave governor is VERY
significantly slower, by a factor of 4-5x in most cases.

While the *cpufreq*-powersave governor (which, remember, is different
from the intel_pstate-powersave governor that should be used) is very
slow, it should also not be used by default on any Xenial or Bionic
system from what I can see, unless I am missing another script/tool that
changes the governor somewhere (I couldn't see any scripts in the charm
or qemu packages that do so). If we read the code of the systemd service
on Bionic that sets the CPU governor, /lib/systemd/set-cpufreq (an
Ubuntu/Debian addition, not upstream systemd; Xenial uses more or less
the same script at /etc/init.d/ondemand), it is quite simple: it prefers
"interactive", "ondemand" and "powersave", in that order, where they
exist. This should result in non-pstate systems using ondemand (correct)
and pstate systems using powersave (also correct). So the bug is most
likely not in the script, but in some strange interaction with the BIOS
that needs to be investigated further.
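
The preference order described above can be sketched as follows (a
simplified illustration of the logic, not a verbatim copy of
/lib/systemd/set-cpufreq; AVAILABLE is hard-coded here for
demonstration):

```shell
# Pick the first preferred governor that the kernel reports as available.
# On a real system AVAILABLE would be read from:
#   /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
AVAILABLE="conservative ondemand userspace powersave performance"
for gov in interactive ondemand powersave; do
  case " $AVAILABLE " in
    *" $gov "*) echo "selected: $gov"; break ;;
  esac
done
# -> selected: ondemand
```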

To that end, if anyone has an affected system with noticeably worse
performance under powersave/ondemand, I'd love to either get access or see the
following data:
 - Output of "sudo dmidecode"
 - A copy of /sys/devices/system/cpu/cpufreq (a tar is fine), with particular
attention to the values of scaling_driver and scaling_governor
 - A basic CPU benchmark, run under 'powertop' to collect information on the
frequency and idle states. You can run it like so: "sudo powertop -C test.csv
-r test.html --workload=/usr/bin/updatedb" - it will output a CSV and HTML
file with the data. We probably need a better benchmark than updatedb,
though; I suggest we try a few things on a "broken" machine to find a
benchmark that reflects the poor performance - updatedb may well do the job,
but it's not what I'd actually suggest we use.

Ideally we would collect this information in a matrix of all 4
combinations of: CPU Governor (Default Ubuntu, User Optimised) and BIOS
setting (OS Control, BIOS Default) so we can understand why we get the
pathologically bad performance in some cases of BIOS Default +
powersave.
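A small hypothetical helper like this could snapshot the driver/governor pair for each combination in that matrix (the sysfs root is a parameter so it can be pointed at a copied tree for testing):

```shell
# Print scaling_driver and scaling_governor for every cpufreq policy.
# Defaults to the real sysfs tree; pass another root for testing.
dump_cpufreq() {
    root="${1:-/sys/devices/system/cpu/cpufreq}"
    for policy in "$root"/policy*; do
        [ -d "$policy" ] || continue
        printf '%s: driver=%s governor=%s\n' \
            "${policy##*/}" \
            "$(cat "$policy/scaling_driver")" \
            "$(cat "$policy/scaling_governor")"
    done
}
```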

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1806012

Title:
  set-cpufreq: 'powersave' governor configuration sanity on ubuntu

[Touch-packages] [Bug 1806012] Re: set-cpufreq: 'powersave' governor configuration sanity on ubuntu server

2019-04-02 Thread Trent Lloyd
Confirmed that pcc-cpufreq *can* be used in preference to intel_pstate
even on a CPU that supports intel_pstate if the firmware tables are
setup to request such. One such server is an E5-2630 v3 HP DL360 G9
(shuckle).

On the default "dynamic" firmware setting you get driver=pcc-cpufreq +
governor=ondemand, with the "OS Control" setting you get
driver=intel_pstate + governor=powersave.

As above this would explain why the very poor performance is only seen
without "OS Control" set, and then, only on some hardware. Since the
firmware is in control of the CPU power states in pcc-cpufreq mode the
exact frequencies / the rate they are changed / etc are partly under
BIOS control. Secondarily it's using an entirely separate kernel path
for when and how to choose these frequencies.

Note that when pcc-cpufreq is in use the startup script
(xenial:/etc/init.d/ondemand, bionic:/lib/systemd/set-cpufreq) will use
ondemand and not powersave (contrary to what the bug report description
states). If a system using a cpufreq-style driver is somehow getting the
powersave governor set, that is a bug, but I haven't yet seen any case
where it would be true.

Also note that in Xenial, the ondemand script runs "sleep 60" before
setting the governor, apparently to let most desktops boot to the login
screen. So any method that tries to override this setting may fail on
Xenial if it runs before the 60 seconds are up (e.g. /etc/rc.local, an
init script, sysctl, etc.).

I did find that we have one other method of setting the governor: the
~canonical-bootstack/sysconfig charm, which had an option added to
allow setting the governor to performance (though it doesn't default to
that). This charm installs the cpufrequtils package, which also seems to
default to 'ondemand'. However, if this charm were configured with
governor=powersave on such a cpufreq system, we would expect very poor
performance. Secondly, when configured with governor=performance on
Xenial it runs before the 'ondemand' script finishes its 60-second wait,
so the change gets reverted. But it will work when first deployed if no
reboot is done. (Bug: https://bugs.launchpad.net/bootstack-ops/+bug/1822774)


To my mind this leaves two remaining questions:
 - Are we ever getting into a state where we have scaling-driver=pcc-cpufreq
or acpi-cpufreq, but governor=powersave? Such a case is likely a bug. I
haven't found one as yet, unless someone deployed the sysconfig charm with
governor=powersave explicitly set (which I have not ruled out).

 - Is there some specific hardware where scaling-driver=pcc-cpufreq and
scaling-governor=ondemand performs poorly? I have yet to run a benchmark
on my example hardware to find out.
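For the first question, an illustrative check could flag the suspect combination; the policy directory is a parameter so it can be run against a fake tree:

```shell
# Flag the suspect combination: a cpufreq-style driver (pcc-cpufreq or
# acpi-cpufreq) paired with the powersave governor. $1 is a policy dir,
# e.g. /sys/devices/system/cpu/cpufreq/policy0.
check_governor_sanity() {
    driver=$(cat "$1/scaling_driver")
    governor=$(cat "$1/scaling_governor")
    case "$driver" in
        pcc-cpufreq|acpi-cpufreq)
            if [ "$governor" = "powersave" ]; then
                echo "suspect: $driver with $governor"
                return 1
            fi
            ;;
    esac
    echo "ok: $driver with $governor"
}
```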

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1806012

Title:
  set-cpufreq: 'powersave' governor configuration sanity on ubuntu
  server

Status in systemd package in Ubuntu:
  In Progress
Status in systemd source package in Xenial:
  In Progress
Status in systemd source package in Bionic:
  In Progress
Status in systemd source package in Cosmic:
  In Progress
Status in systemd source package in Disco:
  In Progress

Bug description:
  Whilst debugging 'slow instance performance' on a Ubuntu Bionic based
  cloud, I observed that the default cpu governor configuration was set
  to 'powersave'; toggling this to 'performance' (while in not anyway a
  particularly green thing todo) resulted in the instance slowness
  disappearing and the cloud performance being as expected (based on a
  prior version of the deploy on Ubuntu Xenial).

  AFAICT Xenial does the same thing albeit in a slight different way,
  but we definitely did not see the same performance laggy-ness under a
  Xenial based cloud.

  Raising against systemd (as this package sets the governor to
  'powersave') - I feel that the switch to 'performance' although
  appropriate then obscures what might be a performance/behavioural
  difference in the underlying kernel when a machine is under load.

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: systemd 237-3ubuntu10.9
  ProcVersionSignature: Ubuntu 4.15.0-39.42-generic 4.15.18
  Uname: Linux 4.15.0-39-generic x86_64
  ApportVersion: 2.20.9-0ubuntu7.5
  Architecture: amd64
  Date: Fri Nov 30 10:05:46 2018
  Lsusb:
   Bus 002 Device 002: ID 8087:8002 Intel Corp. 
   Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
   Bus 001 Device 003: ID 413c:a001 Dell Computer Corp. Hub
   Bus 001 Device 002: ID 8087:800a Intel Corp. 
   Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
  MachineType: Dell Inc. PowerEdge R630
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=C.UTF-8
   SHELL=/bin/bash
  ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-4.15.0-39-generic 
root=UUID=a361a524-47eb-46c3-8a04-e5eaa65188c9 ro hugepages=103117 iommu=pt 
intel_iommu=on
  SourcePacka

[Touch-packages] [Bug 1804576] [NEW] MaxJobTime=0 results in jobs being cancelled immediately instead of never

2018-11-21 Thread Trent Lloyd
Public bug reported:

When using CUPS filters, these filters can take a few seconds to
complete.

In this case no documents are allowed to be lost on printing failures,
so we used to set "MaxJobTime 0" in cupsd.conf which worked on Ubuntu
14.04.

With cups on 18.04, you get the following message in /var/log/cups/error_log 
whenever the filter takes a little longer:
I [12/Nov/2018:14:43:26 +0100] [Job 18] Canceling stuck job after 0 seconds.

Then, the job is deleted and lost.

"MaxJobTime 0" is documented as "indefinite wait", but apparently cups
treats it as "wait almost not at all".

This issue appears to have also been filed upstream:
https://github.com/apple/cups/issues/5438

Temporary workaround is to set the MaxJobTime to a very large value
instead (e.g. 3 years)
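For example, in cupsd.conf (the value is illustrative, roughly 3 years expressed in seconds):

```
# Workaround: effectively-infinite job timeout instead of MaxJobTime 0
MaxJobTime 94608000
```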

** Affects: cups
 Importance: Unknown
 Status: Unknown

** Affects: cups (Ubuntu)
 Importance: Medium
 Status: Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to cups in Ubuntu.
https://bugs.launchpad.net/bugs/1804576

Title:
  MaxJobTime=0 results in jobs being cancelled immediately instead of
  never

Status in CUPS:
  Unknown
Status in cups package in Ubuntu:
  Confirmed

Bug description:
  When using CUPS filters, these filters can take a few seconds to
  complete.

  In this case no documents are allowed to be lost on printing failures,
  so we used to set "MaxJobTime 0" in cupsd.conf which worked on Ubuntu
  14.04.

  With cups on 18.04, you get the following message in /var/log/cups/error_log 
whenever the filter takes a little longer:
  I [12/Nov/2018:14:43:26 +0100] [Job 18] Canceling stuck job after 0 seconds.

  Then, the job is deleted and lost.

  "MaxJobTime 0" is documented as "indefinite wait", but apparently cups
  treats it as "wait almost not at all".

  This issue appears to have also been filed upstream:
  https://github.com/apple/cups/issues/5438

  Temporary workaround is to set the MaxJobTime to a very large value
  instead (e.g. 3 years)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cups/+bug/1804576/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1804576] Re: MaxJobTime=0 results in jobs being cancelled immediately instead of never

2018-11-21 Thread Trent Lloyd
I enabled "debug2" log level and tested again. I found the following entries in 
the log:
d [20/Nov/2018:08:22:25 +0100] cupsdCheckJobs: 1 active jobs, sleeping=0, 
ac-power=-1, reload=0, curtime=1542698545
d [20/Nov/2018:08:22:25 +0100] cupsdCheckJobs: Job 75 - dest="hvitcpdf", 
printer=(nil), state=3, cancel_time=0, hold_until=1542698845, kill_time=0, 
pending_cost=0, pending_timeout=0
[...]
d [20/Nov/2018:08:22:35 +0100] cupsdCheckJobs: 1 active jobs, sleeping=0, 
ac-power=-1, reload=0, curtime=1542698555
d [20/Nov/2018:08:22:35 +0100] cupsdCheckJobs: Job 75 - dest="hvitcpdf", 
printer=0x55faadd92e00, state=5, cancel_time=1542698545, hold_until=1542698845, 
kill_time=0, pending_cost=0, pending_timeout=0

In the first run of cupsdCheckJobs, the cancel_time of the job is zero,
which means to wait indefinitely as expected. In the second run, the
cancel_time has been updated to the time of the creation of the job,
which is wrong.

There is now an individual cancellation time for each job, which is initialized
to MaxJobTime if it's not explicitly set. See printers.c around line 3450:
---
if (!cupsGetOption("job-cancel-after", p->num_options, p->options))
  ippAddInteger(p->attrs, IPP_TAG_PRINTER, IPP_TAG_INTEGER,
                "job-cancel-after-default", MaxJobTime);

---
So if MaxJobTime is set to 0 - which means never to kill the job - the default 
for job-cancel-after is set to 0 - which means to kill the job immediately. So 
I guess there is a missing check for the special value of 0.



[Touch-packages] [Bug 1804576] Re: MaxJobTime=0 results in jobs being cancelled immediately instead of never

2018-11-21 Thread Trent Lloyd
** Bug watch added: github.com/apple/cups/issues #5438
   https://github.com/apple/cups/issues/5438

** Also affects: cups via
   https://github.com/apple/cups/issues/5438
   Importance: Unknown
   Status: Unknown

** Description changed:

  When using CUPS filters, these filters can take a few seconds to
  complete.
  
  In this case no documents are allowed to be lost on printing failures,
  so we used to set "MaxJobTime 0" in cupsd.conf which worked on Ubuntu
  14.04.
  
- With cups on 18.04, you get the following message in /var/log/cups/error_log 
whenever the filter takes a little longer: 
- I [12/Nov/2018:14:43:26 +0100] [Job 18] Canceling stuck job after 0 seconds. 
+ With cups on 18.04, you get the following message in /var/log/cups/error_log 
whenever the filter takes a little longer:
+ I [12/Nov/2018:14:43:26 +0100] [Job 18] Canceling stuck job after 0 seconds.
  
  Then, the job is deleted and lost.
  
  "MaxJobTime 0" is documented as "indefinite wait", but apparently cups
  treats is as "wait almost not at all".
  
  This issue appears to have also been filed upstream:
  https://github.com/apple/cups/issues/5438
+ 
+ Temporary workaround is to set the MaxJobTime to a very large value
+ instead (e.g. 3 years)

** Changed in: cups (Ubuntu)
   Status: New => Confirmed

** Changed in: cups (Ubuntu)
   Importance: Undecided => Medium



[Touch-packages] [Bug 1799265] Re: avahi-daemon high cpu, unusable networking

2018-12-17 Thread Trent Lloyd
** Changed in: avahi (Ubuntu)
   Status: Incomplete => Triaged

** Changed in: avahi (Ubuntu)
 Assignee: (unassigned) => Trent Lloyd (lathiat)

** Bug watch added: github.com/lathiat/avahi/issues #210
   https://github.com/lathiat/avahi/issues/210

** Also affects: avahi via
   https://github.com/lathiat/avahi/issues/210
   Importance: Unknown
   Status: Unknown

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1799265

Title:
  avahi-daemon high cpu, unusable networking

Status in Avahi:
  Unknown
Status in avahi package in Ubuntu:
  Triaged

Bug description:
  Currently running Kubuntu 18.10, Dell XPS 13 9350

  Since updating from Kubuntu 18.04 to 18.10, the avahi-daemon has been
  consistently hampering network performance and using CPU for long
  periods of time.

  When booting machine from off state, avahi-daemon uses an entire CPU
  at max load for approx 10 minutes. During this time, internet
  connectivity via wifi is essentially unusable. The wifi connection is
  good, but it seems that http transactions are cut off midway, so no
  webpage is able to load.

  When waking from sleep, the avahi-daemon causes similar symptoms, but
  with less than 1 full cpu usage, and with somewhat less degraded
  network performance, but still quite unusable.

  I have never had issues with avahi prior to the 18.10 upgrade, so I am
  fairly confident the issue is rooted in that change.

  ProblemType: Bug
  DistroRelease: Ubuntu 18.10
  Package: avahi-daemon 0.7-4ubuntu2
  ProcVersionSignature: Ubuntu 4.18.0-10.11-generic 4.18.12
  Uname: Linux 4.18.0-10-generic x86_64
  ApportVersion: 2.20.10-0ubuntu13
  Architecture: amd64
  CurrentDesktop: KDE
  Date: Mon Oct 22 10:00:34 2018
  InstallationDate: Installed on 2017-07-24 (455 days ago)
  InstallationMedia: Ubuntu 17.04 "Zesty Zapus" - Release amd64 (20170412)
  ProcEnviron:
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   LD_PRELOAD=
   SHELL=/bin/bash
  SourcePackage: avahi
  UpgradeStatus: Upgraded to cosmic on 2018-10-20 (2 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/avahi/+bug/1799265/+subscriptions



[Touch-packages] [Bug 1409872] Re: avahi-0.6.31 doesn't pass Apple conformance test

2018-12-17 Thread Trent Lloyd
** Bug watch added: github.com/lathiat/avahi/issues #2
   https://github.com/lathiat/avahi/issues/2

** Also affects: avahi via
   https://github.com/lathiat/avahi/issues/2
   Importance: Unknown
   Status: Unknown

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1409872

Title:
  avahi-0.6.31 doesn't pass Apple conformance test

Status in Avahi:
  Unknown
Status in avahi package in Ubuntu:
  Confirmed

Bug description:
  We are working to support the Apple Bonjour conformance test 1.3.0, and
  the test fails with avahi version 0.6.31 for the test case SRV
  PROBING/ANNOUNCEMENTS. This test fails on both IPv4 and IPv6. The
  configured network package is dhcpcd-6.6.4.

  After parsing all logs (wireshark, Apple PC and Linux PC syslogs), it
  looks like avahi does not support a particular scenario which the Apple
  Bonjour conformance test looks for. We also confirmed the Apple test is
  in line with RFC 6762 for a special use-case (resolving SRV names on
  power up).

  Below is the bug description,

  setup:
  Apple MAC with Bonjour conformance test - 1.3.0 (latest OS x)
  Apple airport  (latest version)
  Linux device(PC) (ubuntu 14.04)

  Configure all above devices to communicate on link local mode.

  1) Start avahi bonjour conformance test on APPLE PC and Power ON linux
  device with avahi-0.6.31 and with _ssh._tcp.local service file

  2) First the Linux device sends its initial SRV probe message on the link,
followed by the Apple test sending the same SRV (Linux device) question on
the link, for example (as seen in wireshark):
  Linux Device sends -> Who has "SSH" SRV QM question?
  Apple Bonjour Conformance Test -> Who has "SSH" SRV QM question?

  3) After this there is no message from the Linux device on the network,
while the Apple test expects a new SRV probe message from the device.
So the conformance test failed, since the device could not send a new
SRV probe message with a new name for the service "SSH"

  4) After parsing the log files we found that avahi-daemon logged the
service with a new name ("SSH #2") in its log file but did not
publish/probe an SRV message on the network.

Linux device syslog messages:
Loading service file /etc/avahi/services/ssh.service
Service name conflict for "SSH" (/etc/avahi/services/ssh.service),
retrying with "SSH #2".

To manage notifications about this bug go to:
https://bugs.launchpad.net/avahi/+bug/1409872/+subscriptions



[Touch-packages] [Bug 1830516] Re: Avahi Printer advertisements are wrong (Avahi or CUPS?)

2019-05-26 Thread Trent Lloyd
Ultimately this is a bug in CUPS: it drives Avahi using the D-BUS API
(as opposed to manual service files in /etc/avahi/services, which are only
really used by a sysadmin to manually add services; most other types of
advertisements, such as printers, are expected to use the API to advertise
themselves).

Marking this against cups instead, you may also wish to consider filing it 
upstream:
https://github.com/apple/cups

** Also affects: cups (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: avahi (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1830516

Title:
  Avahi Printer advertisements are wrong (Avahi or CUPS?)

Status in avahi package in Ubuntu:
  Invalid
Status in cups package in Ubuntu:
  New

Bug description:
  First I do not know if this is a CUPS issue or an Avahi issue.

  I do not know whether Avahi or CUPS generates the files nor where they
  are. They are not in /etc/avahi/services, where I expected to find
  them, so they must lie somewhere in CUPS.

  I have a printer installed in CUPS. I must use the driver from a
  different model to make it work. Among other things I want avahi and
  CUPS to show the printer name, consistently unless related to the
  print driver used.

  avahi-browse shows

  Xerox_Phaser-6125 @ Hostname (correct)

  Later however in Txt fields avahi-browse shows "product=(DocuPrint
  C525 A-AP)" .. "ty=FX DocuPrint C525 A-AP v1.0" (incorrect
  printer model but is the driver in use. Those fields should relate to
  Product and type, not driver)

  The problem above I suppose is the "trickle down effect" from how CUPS
  names the printer. In the CUPS web GUI I see

   Queue Name Description   LocationMake and Model  Status
  Xerox_Phaser-6125 Xerox_Phaser-6125   HomeFX DocuPrint C525 A-AP 
v1.0 Idle - "Waiting for printer to finish."

  The Make and Model is not Make and model at all, rather it comes from
  the driver used. I know it is commonplace to use say an HP LaserJet PS
  driver when you want PostScript , like back in the day when I printed
  to a LaserWriter, so I am not the only one that sees this.

  Also, I think there should be a proper "representation" in the txt
  fields as well icon as is done with AirScan/eSCL scanners
  advertisements "representation=http://HOSTNAME./images/Icon.png"; as I
  believe some apps will use this icon of the actual printer. This field
  should not be required but optional. These days we connect many
  different OSs to Linux and expect Linux to "do it all", and we still
  want it to look pretty and correct. I know Apple in some cases uses
  the "representation" field. This "representation" field too would have
  to trickle down from CUPS to Avahi, as Avahi would only point to the
  file at http://HOSTNAME:631/images/Icon.png. So it needs to be hosted on
  the CUPS web GUI.

  I also recently read that some distros are moving away from the GUI
  utilities to configure printers in favor of the CUPS web GUI , making
  this more relevant than ever.

  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1830516/+subscriptions



[Touch-packages] [Bug 1830531] Re: .local domain look ups do not trigger name publication

2019-05-26 Thread Trent Lloyd
The most likely cause for this is that you have packets outbound on port
5353 firewalled either locally or on your network device. When another
host pings the address, the responses are sent via multicast and Avahi
caches that response, and so can then use it without having to transmit
a packet.

Until someone else looks it up, the cache is empty and the query packet
is blocked, so no resolution happens.

To confirm this, other than checking and clearing both iptables and
nftables, you could try run tcpdump on your host and then on another
host to see if you can see the packet generated locally, and then if it
actually arrives on the network.


Note that once avahi has the address in its cache, when it queries the network
for it, it includes that address in the packet as a "Known Answer" and the
other host won't respond. So when testing this you are likely best to restart
the local avahi daemon to clear its cache before each test. You can see those
"Known Answers" in the tcpdump though.

Best viewed with wireshark as it makes decoding the DNS part easy.
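As an illustrative capture recipe (the interface name is an example; tcpdump needs root), a tiny helper can print the same command to use on both hosts so the filter matches the mDNS port:

```shell
# Print the capture command for a given interface; mDNS is UDP port 5353.
# Run the printed command (as root) on both the local and the remote host,
# restart avahi-daemon to clear its cache, then repeat the failing lookup.
mdns_capture_cmd() {
    iface="$1"   # e.g. wlan0 (example interface name)
    printf 'tcpdump -ni %s -w mdns-%s.pcap udp port 5353\n' "$iface" "$iface"
}
```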

As a secondary test, I would suggest trying a different network adapter
(possibly wired instead of wireless, or vice versa) - you can also get
these problems from weird drivers in some cases. Though usually broken
drivers prevent you from RECEIVING multicast, and so manifest as this
never working; that seems unlikely in your case, but perhaps not
impossible.

As a third test try 'avahi-resolve-host-name' instead of ping, to
isolate if "avahi" / multicast is the issue, as opposed to the nss-mdns
layer.


I'm going to mark this Incomplete for now; however, if you can show from
the packet dumps that the query packets are really not being sent at all,
even locally, feel free to set the status back to New for further
investigation.

** Changed in: avahi (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1830531

Title:
  .local domain look ups do not trigger name publication

Status in avahi package in Ubuntu:
  Incomplete

Bug description:
  On my local network I find that with 18.04 network name lookups using
  .local domains no not work unless stimulated by a non-18.04 machine
  first. This is observed for ssh, http, https and ping, but probably
  not limited to those.

  To demonstrate this here is a ping example.

  ~ $ ping pi.local
  ping: pi.local: Name or service not known
  ~ $ ping pi.local
  ping: pi.local: Name or service not known

  # At this point head to another machine running 16.04.6 and execute ping 
pi.local.
  # This is immediately successful. Then head back to the 18.04 machine.

  ~ $ ping pi.local
  PING pi.local (192.168.108.28) 56(84) bytes of data.
  64 bytes from pi.this.domain (192.168.108.28): icmp_seq=1 ttl=64 time=4.43 ms
  64 bytes from pi.this.domain (192.168.108.28): icmp_seq=2 ttl=64 time=5.64 ms
  64 bytes from pi.this.domain (192.168.108.28): icmp_seq=3 ttl=64 time=5.98 ms
  64 bytes from pi.this.domain (192.168.108.28): icmp_seq=4 ttl=64 time=5.84 ms
  ^C
  --- pi.local ping statistics ---
  4 packets transmitted, 4 received, 0% packet loss, time 7018ms
  rtt min/avg/max/mdev = 4.435/5.476/5.986/0.613 ms

  After a couple of minutes, the name resolution fails again, but can be
  brought back again by following the procedure above.

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: avahi-daemon 0.7-3.1ubuntu1.2
  ProcVersionSignature: Ubuntu 4.18.0-20.21~18.04.1-generic 4.18.20
  Uname: Linux 4.18.0-20-generic x86_64
  NonfreeKernelModules: nvidia_modeset nvidia
  ApportVersion: 2.20.9-0ubuntu7.6
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Sun May 26 20:04:58 2019
  InstallationDate: Installed on 2019-05-22 (4 days ago)
  InstallationMedia: Ubuntu 18.04.2 LTS "Bionic Beaver" - Release amd64 
(20190210)
  SourcePackage: avahi
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1830531/+subscriptions



[Touch-packages] [Bug 1586528] Re: Avahi-daemon withdraws address record

2019-10-17 Thread Trent Lloyd
The bug was marked invalid for *Avahi* but Confirmed for network-manager
-- because the bug exists not in Avahi (it simply logs the message you
see as a result of the IP address being removed). So the bug is still
valid and confirmed, just for network-manager instead.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to network-manager in Ubuntu.
https://bugs.launchpad.net/bugs/1586528

Title:
  Avahi-daemon withdraws address record

Status in avahi package in Ubuntu:
  Invalid
Status in network-manager package in Ubuntu:
  Confirmed

Bug description:
  For some reason, if I leave my Ubuntu VM up for a prolonged period of
  time the machine will lose connection to the network.  ip addr shows
  that the nic port no longer has an address and an examination of the
  syslog shows this:

  May 27 14:19:38 matt-VirtualBox avahi-daemon[590]: Withdrawing address record 
for 10.0.2.15 on enp0s3.
  May 27 14:19:38 matt-VirtualBox avahi-daemon[590]: Leaving mDNS multicast 
group on interface enp0s3.IPv4 with address 10.0.2.15.
  May 27 14:19:38 matt-VirtualBox avahi-daemon[590]: Interface enp0s3.IPv4 no 
longer relevant for mDNS.

  
  for no known reason.

  The only reliable way to get the network to come back (that I have
  found) is a full reboot.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: avahi-daemon 0.6.32~rc+dfsg-1ubuntu2
  ProcVersionSignature: Ubuntu 4.4.0-22.40-generic 4.4.8
  Uname: Linux 4.4.0-22-generic x86_64
  ApportVersion: 2.20.1-0ubuntu2
  Architecture: amd64
  CurrentDesktop: Unity
  Date: Fri May 27 15:11:34 2016
  InstallationDate: Installed on 2015-10-22 (218 days ago)
  InstallationMedia: Ubuntu 15.10 "Wily Werewolf" - Release amd64 (20151021)
  SourcePackage: avahi
  UpgradeStatus: Upgraded to xenial on 2016-03-30 (58 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1586528/+subscriptions



[Touch-packages] [Bug 1102906] Re: Cannot broadcast both on global and link address on same interface

2019-08-27 Thread Trent Lloyd
Part of the reason this behavior exists in Avahi is that many
applications do not correctly retrieve the scope ID (interface index)
when doing hostname resolution, and if not supplied then connection to
such a link local address will fail. Applications are likely to receive
such an address at random.

Also until very recently nss-mdns didn't actually support passing
through that scope ID, though it now does in the latest versions.

So changing Avahi to always return these link local IPs is much more
likely to break pretty much every other application except Pidgin. To my
mind what Pidgin should do to resolve this issue is to explicitly either
not block based on the incoming IP address, or, bind explicitly to the
IP address that is being advertised to prevent the connection being
sourced from the link local address.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1102906

Title:
  Cannot broadcast both on global and link address on same interface

Status in Avahi:
  Unknown
Status in avahi package in Ubuntu:
  New

Bug description:
  Avahi seems to be hardwired to not offer any link-local addresses on an
  interface if there also exists a global (non-link-local) address on
  the same interface. Unfortunately I have need for this feature.

  I patched the source accordingly myself, creating a new config option
  'publish-local-always' to enable that behavior. It's actually a
  minimal change, and I would be pleased if you could integrate it (or
  something similar). You can find my patch in the attachment below.

To manage notifications about this bug go to:
https://bugs.launchpad.net/avahi/+bug/1102906/+subscriptions



[Touch-packages] [Bug 1102906] Re: Cannot broadcast both on global and link address on same interface

2019-08-30 Thread Trent Lloyd
When I say bind, I actually meant to bind the outgoing connection from
Pidgin (not related to Avahi). So when creating the socket, specify the
source IP address.

The problem is that when you connect without specifying a source, then
(at least for IPv6, due to the routing table's source-address selection)
the outgoing connection ends up with the link-local address as its
source, which Pidgin doesn't know about.

More generally though, this method of blocking a connection based on
the source IP is problematic and will also cause problems in other
scenarios, for example if a user is connected to the same network from
two network cards (e.g. wired + wireless) and Pidgin wants to connect to
the IP of the interface that does not have route preference.

It would perhaps make more sense to authenticate connecting peers using
either a list of all subnets on that interface, i.e. check that the
connection comes from some local subnet on that interface, or otherwise
some kind of public-private key scheme. The "subnet check" could also be
simplified by always allowing connections from the link-local subnet
regardless of the user's IP.
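A hedged sketch of the "subnet check" idea in Python; the function name and subnet list are hypothetical, not Avahi or Pidgin code:

```python
import ipaddress

def allowed(peer_ip, interface_subnets):
    """Return True if peer_ip falls inside any subnet on the interface.

    interface_subnets is a hypothetical list of subnets configured on
    the interface; the link-local ranges are always included, so a
    connection sourced from fe80::/10 or 169.254.0.0/16 passes even
    when the advertised record carries a global address.
    """
    always = [ipaddress.ip_network("fe80::/10"),
              ipaddress.ip_network("169.254.0.0/16")]
    addr = ipaddress.ip_address(peer_ip)
    return any(addr in net
               for net in list(interface_subnets) + always
               if net.version == addr.version)

subnets = [ipaddress.ip_network("192.168.1.0/24")]
print(allowed("192.168.1.50", subnets))  # on-link global subnet: allowed
print(allowed("fe80::1", subnets))       # link-local: always allowed
print(allowed("10.0.0.1", subnets))      # off-interface source: rejected
```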

Does Pidgin use the IP address to identify which user is connecting, or
does it do some kind of username authentication on the socket after
connection? Remember that because mDNS itself is essentially
unauthenticated, you're not really gaining any "security" from this
check. At best I imagine it's the most convenient way to map a user to
the incoming connection if there's no such metadata inside the
connection itself.


Otherwise, circling back to the original suggestion of always
advertising the link-local address: if we wanted to go down that route,
I think we would first need to modify both libnss-mdns and Avahi to re-
order the list of IPs returned during hostname resolution so that the
"closest" IP comes first, i.e. the IP we are most likely to be able to
connect to, with global addresses ahead of link-local ones. That way
most applications will get the global IP first and won't need the
scope_id, so they won't fail once we add this feature.
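The proposed re-ordering can be sketched like this in Python (illustrative only; neither Avahi nor libnss-mdns works this way today):

```python
import ipaddress

def order_for_clients(addresses):
    """Sort resolved addresses so global ones come first.

    Applications that take the first result then get an address they
    can use without a scope_id, and only fall back to the link-local
    address afterwards. sorted() is stable, so the original relative
    order is preserved within each group.
    """
    return sorted(addresses,
                  key=lambda a: ipaddress.ip_address(a).is_link_local)

resolved = ["fe80::1ff:fe23:4567:890a", "2001:db8::5", "192.168.1.20"]
print(order_for_clients(resolved))
# ['2001:db8::5', '192.168.1.20', 'fe80::1ff:fe23:4567:890a']
```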

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1102906

Title:
  Cannot broadcast both on global and link address on same interface

Status in Avahi:
  Unknown
Status in avahi package in Ubuntu:
  New

Bug description:
  Avahi seems to be hardwired to not offer any link-local addresses on
  an interface if there also exists a global (non-link-local) address on
  the same interface. Unfortunately I have need for this feature.

  I patched the source accordingly myself, creating a new config option
  'publish-local-always' to enable that behavior. It's actually a
  minimal change, and I would be pleased if you could integrate it (or
  something similar). You can find my patch in the attachment below.

To manage notifications about this bug go to:
https://bugs.launchpad.net/avahi/+bug/1102906/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 652870] Re: Avahi update causes internet connections to fail

2019-01-06 Thread Trent Lloyd
** Changed in: avahi (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/652870

Title:
  Avahi update causes internet connections to fail

Status in avahi package in Ubuntu:
  Invalid

Bug description:
  I'm using Lucid with all updates applied as of a week ago. Everything
  on my system was working fine until this morning's system update.

  Part of the update was patches to Avahi "Version 0.6.25-1ubuntu6.1".
  After reboot, I couldn't get onto the internet.

  When I looked at ifconfig, I normally have eth2 & lo entries. The
  problem seems to be a new eth2:avahi entry that had an IP address
  assigned that isn't in my LAN. The eth2 entry was also there, but
  didn't have an IP.

  I solved this, sort of, by editing /etc/default/avahi-daemon and
  setting AVAHI_DAEMON_DETECT_LOCAL=0 instead of 1. That's a patch I
  found on http://ubuntuforums.org/showthread.php?t=1339516 answering
  "How do I disable Avahi??"

  After a reboot, I could connect to the internet again.

  However, some things started behaving strangely. Like the colours of
  everything had changed as though the theme had been changed. Just
  going into the Preferences/Appearance seemed to be enough to fix it -
  I didn't have to click anything else. Very odd. Might be related to a
  different update package.

  FWIW. I don't think I need the avahi services from what I understand
  of it.

  I'm running Ubuntu Lucid within a VMware virtual machine on an XP
  host. It's a laptop, so the network environment does change twice a
  day. But, as I said, everything was stable until this morning's update.

  I've since rolled back to a VM snapshot prior to the update and I'm
  going to delay updates for a little while, but I can reinstate the
  snapshot containing the update if necessary.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/652870/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1814105] Re: random 404s on security.ubuntu.com for libavahi packages

2019-01-31 Thread Trent Lloyd
Marking invalid as it's not technically a bug against avahi but, as
noted above, I have arranged for the issue to be looked at.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1814105

Title:
  random 404s on security.ubuntu.com for libavahi packages

Status in avahi package in Ubuntu:
  Invalid

Bug description:
  Hi, when I try to apt-get upgrade from xenial-security I get 404s from
  some of the security.ubuntu.com mirrors. Not sure if this is the place
  to report it but it seems to be related to
  https://launchpad.net/ubuntu/+source/avahi/0.6.32~rc+dfsg-
  1ubuntu2.3/+publishinghistory.

  ```
  root@okok-blabla-magweb-cmbl ~ # wget 
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
 
  --2019-01-31 13:48:19--  
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
  Resolving security.ubuntu.com (security.ubuntu.com)... 91.189.88.161, 
91.189.88.152, 91.189.88.149, ...
  Connecting to security.ubuntu.com (security.ubuntu.com)|91.189.88.161|:80... 
connected.
  HTTP request sent, awaiting response... 404 Not Found
  2019-01-31 13:48:19 ERROR 404: Not Found.

  root@okok-blabla-magweb-cmbl ~ # wget 
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
 
  --2019-01-31 13:48:20--  
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
  Resolving security.ubuntu.com (security.ubuntu.com)... 91.189.88.161, 
91.189.88.152, 91.189.88.149, ...
  Connecting to security.ubuntu.com (security.ubuntu.com)|91.189.88.161|:80... 
connected.
  HTTP request sent, awaiting response... 404 Not Found
  2019-01-31 13:48:20 ERROR 404: Not Found.

  root@okok-blabla-magweb-cmbl ~ # wget 
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
 
  --2019-01-31 13:48:20--  
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
  Resolving security.ubuntu.com (security.ubuntu.com)... 91.189.88.152, 
91.189.91.26, 91.189.91.23, ...
  Connecting to security.ubuntu.com (security.ubuntu.com)|91.189.88.152|:80... 
connected.
  HTTP request sent, awaiting response... 200 OK
  Length: 21452 (21K) [application/x-debian-package]
  Saving to: ‘libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb.1’

  libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb.1
  
100%[==>]
  20.95K  --.-KB/sin 0.02s

  2019-01-31 13:48:21 (1.21 MB/s) - ‘libavahi-common-data_0.6.32~rc
  +dfsg-1ubuntu2.3_amd64.deb.1’ saved [21452/21452]

  root@okok-blabla-magweb-cmbl ~ # wget 
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
 
  --2019-01-31 13:48:22--  
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
  Resolving security.ubuntu.com (security.ubuntu.com)... 91.189.88.152, 
91.189.88.149, 91.189.88.161, ...
  Connecting to security.ubuntu.com (security.ubuntu.com)|91.189.88.152|:80... 
connected.
  HTTP request sent, awaiting response... 200 OK
  Length: 21452 (21K) [application/x-debian-package]
  Saving to: ‘libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb.2’

  libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb.2
  
100%[==>]
  20.95K  --.-KB/sin 0.03s

  2019-01-31 13:48:23 (664 KB/s) - ‘libavahi-common-data_0.6.32~rc+dfsg-
  1ubuntu2.3_amd64.deb.2’ saved [21452/21452]

  root@okok-blabla-magweb-cmbl ~ # wget 
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
 
  --2019-01-31 13:48:24--  
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
  Resolving security.ubuntu.com (security.ubuntu.com)... 91.189.88.162, 
91.189.91.23, 91.189.88.152, ...
  Connecting to security.ubuntu.com (security.ubuntu.com)|91.189.88.162|:80... 
connected.
  HTTP request sent, awaiting response... 404 Not Found
  2019-01-31 13:48:24 ERROR 404: Not Found.
  ```

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1814105/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1814105] Re: random 404s on security.ubuntu.com for libavahi packages

2019-01-31 Thread Trent Lloyd
I was able to reproduce this issue, I have forwarded it onto the
relevant team to look into.

** Changed in: avahi (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1814105

Title:
  random 404s on security.ubuntu.com for libavahi packages

Status in avahi package in Ubuntu:
  Invalid

Bug description:
  Hi, when I try to apt-get upgrade from xenial-security I get 404s from
  some of the security.ubuntu.com mirrors. Not sure if this is the place
  to report it but it seems to be related to
  https://launchpad.net/ubuntu/+source/avahi/0.6.32~rc+dfsg-
  1ubuntu2.3/+publishinghistory.

  ```
  root@okok-blabla-magweb-cmbl ~ # wget 
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
 
  --2019-01-31 13:48:19--  
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
  Resolving security.ubuntu.com (security.ubuntu.com)... 91.189.88.161, 
91.189.88.152, 91.189.88.149, ...
  Connecting to security.ubuntu.com (security.ubuntu.com)|91.189.88.161|:80... 
connected.
  HTTP request sent, awaiting response... 404 Not Found
  2019-01-31 13:48:19 ERROR 404: Not Found.

  root@okok-blabla-magweb-cmbl ~ # wget 
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
 
  --2019-01-31 13:48:20--  
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
  Resolving security.ubuntu.com (security.ubuntu.com)... 91.189.88.161, 
91.189.88.152, 91.189.88.149, ...
  Connecting to security.ubuntu.com (security.ubuntu.com)|91.189.88.161|:80... 
connected.
  HTTP request sent, awaiting response... 404 Not Found
  2019-01-31 13:48:20 ERROR 404: Not Found.

  root@okok-blabla-magweb-cmbl ~ # wget 
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
 
  --2019-01-31 13:48:20--  
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
  Resolving security.ubuntu.com (security.ubuntu.com)... 91.189.88.152, 
91.189.91.26, 91.189.91.23, ...
  Connecting to security.ubuntu.com (security.ubuntu.com)|91.189.88.152|:80... 
connected.
  HTTP request sent, awaiting response... 200 OK
  Length: 21452 (21K) [application/x-debian-package]
  Saving to: ‘libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb.1’

  libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb.1
  
100%[==>]
  20.95K  --.-KB/sin 0.02s

  2019-01-31 13:48:21 (1.21 MB/s) - ‘libavahi-common-data_0.6.32~rc
  +dfsg-1ubuntu2.3_amd64.deb.1’ saved [21452/21452]

  root@okok-blabla-magweb-cmbl ~ # wget 
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
 
  --2019-01-31 13:48:22--  
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
  Resolving security.ubuntu.com (security.ubuntu.com)... 91.189.88.152, 
91.189.88.149, 91.189.88.161, ...
  Connecting to security.ubuntu.com (security.ubuntu.com)|91.189.88.152|:80... 
connected.
  HTTP request sent, awaiting response... 200 OK
  Length: 21452 (21K) [application/x-debian-package]
  Saving to: ‘libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb.2’

  libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb.2
  
100%[==>]
  20.95K  --.-KB/sin 0.03s

  2019-01-31 13:48:23 (664 KB/s) - ‘libavahi-common-data_0.6.32~rc+dfsg-
  1ubuntu2.3_amd64.deb.2’ saved [21452/21452]

  root@okok-blabla-magweb-cmbl ~ # wget 
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
 
  --2019-01-31 13:48:24--  
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
  Resolving security.ubuntu.com (security.ubuntu.com)... 91.189.88.162, 
91.189.91.23, 91.189.88.152, ...
  Connecting to security.ubuntu.com (security.ubuntu.com)|91.189.88.162|:80... 
connected.
  HTTP request sent, awaiting response... 404 Not Found
  2019-01-31 13:48:24 ERROR 404: Not Found.
  ```

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1814105/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1792978] Re: initscript avahi-daemon, action "start" failed

2019-02-04 Thread Trent Lloyd
Can you please give me more information about the environments this is
failing in?

 (1) Are these production or testing deployments
 (2) Is it being installed to a container or a VM
 (3) Can you give me the full /var/log/syslog from the example case shown in 
the description or another reproduction
 (4) Can you give me a full rundown of the steps used to get into this state, 
e.g. the machine was deployed with a specific Ubuntu ISO, followed by the full 
list of commands used up to that time, etc.

** Changed in: avahi (Ubuntu)
 Assignee: (unassigned) => Trent Lloyd (lathiat)

** Changed in: avahi (Ubuntu)
   Importance: Undecided => High

** Changed in: avahi (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1792978

Title:
  initscript avahi-daemon, action "start" failed

Status in avahi package in Ubuntu:
  Incomplete

Bug description:
  When installing maas-region-controller, avahi-daemon failed to install
  because it seemed to already be running.

  
  $ apt-get -q install -y maas-region-controller
  
  invoke-rc.d: initscript avahi-daemon, action "start" failed.
  ● avahi-daemon.service - Avahi mDNS/DNS-SD Stack
 Loaded: loaded (/lib/systemd/system/avahi-daemon.service; enabled; vendor 
preset: enabled)
 Active: failed (Result: exit-code) since Sat 2018-09-15 
19:42:29 UTC; 9ms ago
 Process: 22726 ExecStart=/usr/sbin/avahi-daemon -s (code=exited, 
status=255)
   Main PID: 22726 (code=exited, status=255)
   
  Sep 15 19:42:29 leafeon systemd[1]: Starting Avahi mDNS/DNS-SD Stack...
  Sep 15 19:42:29 leafeon avahi-daemon[22726]: Daemon already running on PID 
21868
  Sep 15 19:42:29 leafeon systemd[1]: avahi-daemon.service: Main 
process exit.../a
  Sep 15 19:42:29 leafeon systemd[1]: Failed to start Avahi 
mDNS/DNS-SD Stack.
  Sep 15 19:42:29 leafeon systemd[1]: avahi-daemon.service: Unit 
entered fail...e.
  Sep 15 19:42:29 leafeon systemd[1]: avahi-daemon.service: Failed 
with resul...'.
  Hint: Some lines were ellipsized, use -l to show in full.
  dpkg: error processing package avahi-daemon (--configure):
   subprocess installed post-installation script returned error exit status 1
  dpkg: dependency problems prevent configuration of avahi-utils:
   avahi-utils depends on avahi-daemon; however:
Package avahi-daemon is not configured yet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1792978/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1814105] Re: random 404s on security.ubuntu.com for libavahi packages

2019-02-14 Thread Trent Lloyd
Forgot to comment: this issue was resolved by the Mirror team shortly
afterwards.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1814105

Title:
  random 404s on security.ubuntu.com for libavahi packages

Status in avahi package in Ubuntu:
  Invalid

Bug description:
  Hi, when I try to apt-get upgrade from xenial-security I get 404s from
  some of the security.ubuntu.com mirrors. Not sure if this is the place
  to report it but it seems to be related to
  https://launchpad.net/ubuntu/+source/avahi/0.6.32~rc+dfsg-
  1ubuntu2.3/+publishinghistory.

  ```
  root@okok-blabla-magweb-cmbl ~ # wget 
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
 
  --2019-01-31 13:48:19--  
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
  Resolving security.ubuntu.com (security.ubuntu.com)... 91.189.88.161, 
91.189.88.152, 91.189.88.149, ...
  Connecting to security.ubuntu.com (security.ubuntu.com)|91.189.88.161|:80... 
connected.
  HTTP request sent, awaiting response... 404 Not Found
  2019-01-31 13:48:19 ERROR 404: Not Found.

  root@okok-blabla-magweb-cmbl ~ # wget 
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
 
  --2019-01-31 13:48:20--  
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
  Resolving security.ubuntu.com (security.ubuntu.com)... 91.189.88.161, 
91.189.88.152, 91.189.88.149, ...
  Connecting to security.ubuntu.com (security.ubuntu.com)|91.189.88.161|:80... 
connected.
  HTTP request sent, awaiting response... 404 Not Found
  2019-01-31 13:48:20 ERROR 404: Not Found.

  root@okok-blabla-magweb-cmbl ~ # wget 
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
 
  --2019-01-31 13:48:20--  
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
  Resolving security.ubuntu.com (security.ubuntu.com)... 91.189.88.152, 
91.189.91.26, 91.189.91.23, ...
  Connecting to security.ubuntu.com (security.ubuntu.com)|91.189.88.152|:80... 
connected.
  HTTP request sent, awaiting response... 200 OK
  Length: 21452 (21K) [application/x-debian-package]
  Saving to: ‘libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb.1’

  libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb.1
  
100%[==>]
  20.95K  --.-KB/sin 0.02s

  2019-01-31 13:48:21 (1.21 MB/s) - ‘libavahi-common-data_0.6.32~rc
  +dfsg-1ubuntu2.3_amd64.deb.1’ saved [21452/21452]

  root@okok-blabla-magweb-cmbl ~ # wget 
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
 
  --2019-01-31 13:48:22--  
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
  Resolving security.ubuntu.com (security.ubuntu.com)... 91.189.88.152, 
91.189.88.149, 91.189.88.161, ...
  Connecting to security.ubuntu.com (security.ubuntu.com)|91.189.88.152|:80... 
connected.
  HTTP request sent, awaiting response... 200 OK
  Length: 21452 (21K) [application/x-debian-package]
  Saving to: ‘libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb.2’

  libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb.2
  
100%[==>]
  20.95K  --.-KB/sin 0.03s

  2019-01-31 13:48:23 (664 KB/s) - ‘libavahi-common-data_0.6.32~rc+dfsg-
  1ubuntu2.3_amd64.deb.2’ saved [21452/21452]

  root@okok-blabla-magweb-cmbl ~ # wget 
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
 
  --2019-01-31 13:48:24--  
http://security.ubuntu.com/ubuntu/pool/main/a/avahi/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.3_amd64.deb
  Resolving security.ubuntu.com (security.ubuntu.com)... 91.189.88.162, 
91.189.91.23, 91.189.88.152, ...
  Connecting to security.ubuntu.com (security.ubuntu.com)|91.189.88.162|:80... 
connected.
  HTTP request sent, awaiting response... 404 Not Found
  2019-01-31 13:48:24 ERROR 404: Not Found.
  ```

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1814105/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1752411] Re: bind9-host, avahi-daemon-check-dns.sh hang forever causes network connections to get stuck

2018-04-20 Thread Trent Lloyd
I'd still like to get the debdiff for the 'timeout' workaround that I
prepared uploaded. Even if we manage to debug the bind9-host issue, it
will still be useful to have the timeout command there as a backup, and
we are not long from running out of time for the bionic release.

I am actively looking at the bind9-host issue as well, but I do not
expect to get it fixed before release.
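The effect of the timeout workaround can be sketched in Python (a stand-in for wrapping the hanging bind9-host call in a timeout; the command shown is a dummy that simply sleeps):

```python
import subprocess
import sys

# Give the child a deadline and treat expiry as "no answer", so a
# stuck DNS lookup can no longer block the calling hook forever.
cmd = [sys.executable, "-c", "import time; time.sleep(60)"]  # stand-in for a hung lookup
try:
    subprocess.run(cmd, timeout=1, check=False)
    result = "answered"
except subprocess.TimeoutExpired:
    result = "timed out"
print(result)  # timed out
```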

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1752411

Title:
  bind9-host, avahi-daemon-check-dns.sh hang forever causes network
  connections to get stuck

Status in avahi package in Ubuntu:
  Confirmed
Status in bind9 package in Ubuntu:
  Confirmed
Status in openconnect package in Ubuntu:
  Invalid

Bug description:
  On 18.04 Openconnect connects successfully to any of multiple VPN
  concentrators but network traffic does not flow across the VPN tunnel
  connection. When testing on 16.04 this works flawlessly. This also
  worked on this system when it was on 17.10.

  I have tried reducing the mtu of the tun0 network device but this has
  not resulted in me being able to successfully ping the IP address.

  Example showing ping attempt to the IP of DNS server:

  ~$ cat /etc/resolv.conf 
  # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
  # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
  # 127.0.0.53 is the systemd-resolved stub resolver.
  # run "systemd-resolve --status" to see details about the actual nameservers.

  nameserver 172.29.88.11
  nameserver 127.0.0.53

  liam@liam-lat:~$ netstat -nr
  Kernel IP routing table
  Destination Gateway Genmask Flags   MSS Window  irtt Iface
  0.0.0.0 192.168.1.1 0.0.0.0 UG0 0  0 
wlp2s0
  105.27.198.106  192.168.1.1 255.255.255.255 UGH   0 0  0 
wlp2s0
  169.254.0.0 0.0.0.0 255.255.0.0 U 0 0  0 
docker0
  172.17.0.0  0.0.0.0 255.255.0.0 U 0 0  0 
docker0
  172.29.0.0  0.0.0.0 255.255.0.0 U 0 0  0 tun0
  172.29.88.110.0.0.0 255.255.255.255 UH0 0  0 tun0
  192.168.1.0 0.0.0.0 255.255.255.0   U 0 0  0 
wlp2s0
  liam@liam-lat:~$ ping 172.29.88.11
  PING 172.29.88.11 (172.29.88.11) 56(84) bytes of data.
  ^C
  --- 172.29.88.11 ping statistics ---
  4 packets transmitted, 0 received, 100% packet loss, time 3054ms

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: openconnect 7.08-3
  ProcVersionSignature: Ubuntu 4.15.0-10.11-generic 4.15.3
  Uname: Linux 4.15.0-10-generic x86_64
  ApportVersion: 2.20.8-0ubuntu10
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Wed Feb 28 22:11:33 2018
  InstallationDate: Installed on 2017-06-15 (258 days ago)
  InstallationMedia: Ubuntu 16.04.1 LTS "Xenial Xerus" - Release amd64 
(20160719)
  SourcePackage: openconnect
  UpgradeStatus: Upgraded to bionic on 2018-02-22 (6 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1752411/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1586528] Re: Avahi-daemon withdraws address record

2018-05-14 Thread Trent Lloyd
Thanks for the note, ronny; that is really helpful and may well be the
cause of many of these cases.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to network-manager in Ubuntu.
https://bugs.launchpad.net/bugs/1586528

Title:
  Avahi-daemon withdraws address record

Status in avahi package in Ubuntu:
  Confirmed
Status in network-manager package in Ubuntu:
  Confirmed

Bug description:
  For some reason, if I leave my Ubuntu VM up for a prolonged period of
  time the machine will lose connection to the network.  ip addr shows
  that the nic port no longer has an address and an examination of the
  syslog shows this:

  May 27 14:19:38 matt-VirtualBox avahi-daemon[590]: Withdrawing address record 
for 10.0.2.15 on enp0s3.
  May 27 14:19:38 matt-VirtualBox avahi-daemon[590]: Leaving mDNS multicast 
group on interface enp0s3.IPv4 with address 10.0.2.15.
  May 27 14:19:38 matt-VirtualBox avahi-daemon[590]: Interface enp0s3.IPv4 no 
longer relevant for mDNS.

  
  for no known reason.

  The only reliable way to get the network to come back (that I have
  found) is a full reboot.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: avahi-daemon 0.6.32~rc+dfsg-1ubuntu2
  ProcVersionSignature: Ubuntu 4.4.0-22.40-generic 4.4.8
  Uname: Linux 4.4.0-22-generic x86_64
  ApportVersion: 2.20.1-0ubuntu2
  Architecture: amd64
  CurrentDesktop: Unity
  Date: Fri May 27 15:11:34 2016
  InstallationDate: Installed on 2015-10-22 (218 days ago)
  InstallationMedia: Ubuntu 15.10 "Wily Werewolf" - Release amd64 (20151021)
  SourcePackage: avahi
  UpgradeStatus: Upgraded to xenial on 2016-03-30 (58 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1586528/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1752411] Re: bind9-host, avahi-daemon-check-dns.sh hang forever causes network connections to get stuck

2018-05-22 Thread Trent Lloyd
Sponsors: Can we get this debdiff uploaded now? We've had a few more
reports and I'd like to get this workaround in place.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1752411

Title:
  bind9-host, avahi-daemon-check-dns.sh hang forever causes network
  connections to get stuck

Status in avahi package in Ubuntu:
  Confirmed
Status in bind9 package in Ubuntu:
  Confirmed
Status in openconnect package in Ubuntu:
  Invalid
Status in avahi package in Debian:
  New

Bug description:
  On 18.04 Openconnect connects successfully to any of multiple VPN
  concentrators but network traffic does not flow across the VPN tunnel
  connection. When testing on 16.04 this works flawlessly. This also
  worked on this system when it was on 17.10.

  I have tried reducing the mtu of the tun0 network device but this has
  not resulted in me being able to successfully ping the IP address.

  Example showing ping attempt to the IP of DNS server:

  ~$ cat /etc/resolv.conf 
  # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
  # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
  # 127.0.0.53 is the systemd-resolved stub resolver.
  # run "systemd-resolve --status" to see details about the actual nameservers.

  nameserver 172.29.88.11
  nameserver 127.0.0.53

  liam@liam-lat:~$ netstat -nr
  Kernel IP routing table
  Destination Gateway Genmask Flags   MSS Window  irtt Iface
  0.0.0.0 192.168.1.1 0.0.0.0 UG0 0  0 
wlp2s0
  105.27.198.106  192.168.1.1 255.255.255.255 UGH   0 0  0 
wlp2s0
  169.254.0.0 0.0.0.0 255.255.0.0 U 0 0  0 
docker0
  172.17.0.0  0.0.0.0 255.255.0.0 U 0 0  0 
docker0
  172.29.0.0  0.0.0.0 255.255.0.0 U 0 0  0 tun0
  172.29.88.110.0.0.0 255.255.255.255 UH0 0  0 tun0
  192.168.1.0 0.0.0.0 255.255.255.0   U 0 0  0 
wlp2s0
  liam@liam-lat:~$ ping 172.29.88.11
  PING 172.29.88.11 (172.29.88.11) 56(84) bytes of data.
  ^C
  --- 172.29.88.11 ping statistics ---
  4 packets transmitted, 0 received, 100% packet loss, time 3054ms

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: openconnect 7.08-3
  ProcVersionSignature: Ubuntu 4.15.0-10.11-generic 4.15.3
  Uname: Linux 4.15.0-10-generic x86_64
  ApportVersion: 2.20.8-0ubuntu10
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Wed Feb 28 22:11:33 2018
  InstallationDate: Installed on 2017-06-15 (258 days ago)
  InstallationMedia: Ubuntu 16.04.1 LTS "Xenial Xerus" - Release amd64 
(20160719)
  SourcePackage: openconnect
  UpgradeStatus: Upgraded to bionic on 2018-02-22 (6 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1752411/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1586528] Re: Avahi-daemon withdraws address record

2018-05-25 Thread Trent Lloyd
** Changed in: avahi (Ubuntu)
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to network-manager in Ubuntu.
https://bugs.launchpad.net/bugs/1586528

Title:
  Avahi-daemon withdraws address record

Status in avahi package in Ubuntu:
  Invalid
Status in network-manager package in Ubuntu:
  Confirmed

Bug description:
  For some reason, if I leave my Ubuntu VM up for a prolonged period of
  time the machine will lose connection to the network.  ip addr shows
  that the nic port no longer has an address and an examination of the
  syslog shows this:

  May 27 14:19:38 matt-VirtualBox avahi-daemon[590]: Withdrawing address record for 10.0.2.15 on enp0s3.
  May 27 14:19:38 matt-VirtualBox avahi-daemon[590]: Leaving mDNS multicast group on interface enp0s3.IPv4 with address 10.0.2.15.
  May 27 14:19:38 matt-VirtualBox avahi-daemon[590]: Interface enp0s3.IPv4 no longer relevant for mDNS.

  
  for no known reason.

  The only reliable way to get the network to come back (that I have
  found) is a full reboot.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: avahi-daemon 0.6.32~rc+dfsg-1ubuntu2
  ProcVersionSignature: Ubuntu 4.4.0-22.40-generic 4.4.8
  Uname: Linux 4.4.0-22-generic x86_64
  ApportVersion: 2.20.1-0ubuntu2
  Architecture: amd64
  CurrentDesktop: Unity
  Date: Fri May 27 15:11:34 2016
  InstallationDate: Installed on 2015-10-22 (218 days ago)
  InstallationMedia: Ubuntu 15.10 "Wily Werewolf" - Release amd64 (20151021)
  SourcePackage: avahi
  UpgradeStatus: Upgraded to xenial on 2016-03-30 (58 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1586528/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1661869] Re: maas install fails inside of a 16.04 lxd container due to avahi problems

2018-03-28 Thread Trent Lloyd
** Tags removed: verification-needed-artful verification-needed-trusty 
verification-needed-xenial
** Tags added: verification-done-artful verification-done-trusty 
verification-done-xenial

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1661869

Title:
  maas install fails inside of a 16.04 lxd container due to avahi
  problems

Status in MAAS:
  Invalid
Status in avahi package in Ubuntu:
  Fix Released
Status in lxd package in Ubuntu:
  Invalid
Status in avahi source package in Trusty:
  Fix Committed
Status in lxd source package in Trusty:
  Invalid
Status in avahi source package in Xenial:
  Fix Committed
Status in lxd source package in Xenial:
  Invalid
Status in avahi source package in Artful:
  Fix Committed
Status in lxd source package in Artful:
  Invalid

Bug description:
  [Original Description]
  The bug, and workaround, are clearly described in this mailing list thread:

  https://lists.linuxcontainers.org/pipermail/lxc-
  users/2016-January/010791.html

  I'm trying to install MAAS in a LXD container, but that's failing due
  to avahi package install problems.  I'm tagging all packages here.

  [Issue]
  Avahi sets a number of rlimits on startup, including a limit on the maximum
number of processes (nproc=2) and limits on memory usage. These limits are hit
in a number of cases: the maximum process limit is hit if you run lxd
containers in 'privileged' mode (so that avahi has the same uid in multiple
containers), and large networks can trigger the memory limit.

  The fix is to remove these default rlimits completely from the
  configuration file.

  [Impact]

   * Avahi is unable to start inside of containers without UID namespace
isolation, because an rlimit on the maximum number of processes is set by
default to 2. When a container launches Avahi, the total number of processes
on the system in all containers exceeds this limit and Avahi is killed. It
also fails at install time rather than runtime, due to a failure to start the
service.
   * Some users also have issues with the maximum memory allocation causing
Avahi to exit on networks with a large number of services, as the memory limit
was quite small (4MB). Refer to LP #1638345

  [Test Case]

   * setup lxd (apt install lxd, lxd init, get working networking)
   * lxc launch ubuntu:16.04 avahi-test --config security.privileged=true
   * lxc exec avahi-test sudo apt install avahi-daemon

  This will fail if the parent host has avahi-daemon installed; if it does
  not, you can set up a second container (avahi-test2) and install avahi
  there. That should then fail, as the issue requires two copies of
  avahi-daemon in the same uid namespace.

  [Regression Potential]

   * The fix removes all rlimits configured by avahi on startup; this is
  an extra step avahi takes that most programs do not (limiting memory
  usage, running process count, etc). It's possible an unknown bug could
  then consume significant system resources as a result of that limit no
  longer being in place, a problem previously hidden by Avahi crashing
  instead. However, I believe this risk is significantly reduced: this
  change has been shipping upstream for many months with no reports of
  new problems, while fixing a number of existing crashes/problems.

   * The main case where this may not fix the issue is users who have
  modified their avahi-daemon.conf file, but it will fix new installs
  and most existing installs, as most users don't modify the file; users
  who have modified it may be prompted on upgrade to replace the file.

  [Other Info]

   * This change already exists upstream in 0.7 which is in bionic.  SRU
  required to artful, xenial, trusty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1661869/+subscriptions



[Touch-packages] [Bug 1661869] Re: maas install fails inside of a 16.04 lxd container due to avahi problems

2018-03-28 Thread Trent Lloyd
Verification completed on trusty, xenial and artful

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1661869



[Touch-packages] [Bug 1661869] Re: maas install fails inside of a 16.04 lxd container due to avahi problems

2018-04-01 Thread Trent Lloyd
Sure thing!

I conducted two tests based on the reproduction steps in the SRU
template

 * setup lxd (apt install lxd, lxd init, get working networking)
 * lxc launch ubuntu:16.04 avahi-test --config security.privileged=true
 * lxc exec avahi-test sudo apt install avahi-daemon

For xenial, artful versions I installed a container, installed the
current package and then verified that it failed to install/start as
expected.  I then removed that container, created a fresh container,
enabled -proposed and tested the install again to ensure it succeeded
with the new version.  I then further installed avahi-utils and executed
"avahi-browse -a" to ensure services from the network were appearing and
that the /etc/avahi/avahi-daemon.conf file had changed as expected based
on the patch (which was the only change, there are no code changes).

For trusty I conducted the same tests; however, the initial package
install does not fail under LXD, due to a patch in the trusty version
of avahi that skips the nproc rlimit inside containers (for reasons
that no longer apply to modern lxd versions). I did still ensure the
avahi-daemon.conf file was updated as expected. The patch is still
required on trusty because a host that runs containers will still hit
the problem with the avahi instance on the host itself, which still
has the rlimit applied (even though the containers themselves don't
see the issue).

Lastly, for completeness, for each version I also installed the broken
version and tested that an upgrade (rather than a fresh install) also
went as expected.


Hope that helps.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1661869


[Touch-packages] [Bug 1638345] Re: avahi-daemon crashes multiple times an hour

2018-04-05 Thread Trent Lloyd
This bug was fixed in the package avahi for trusty, xenial and artful.
bionic is not affected by this issue.

xenial: 0.6.32~rc+dfsg-1ubuntu2.1
trusty: 0.6.31-4ubuntu1.2

Would be great if the various people affected by this could confirm they
no longer hit the issue.

** Changed in: avahi (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1638345

Title:
  avahi-daemon crashes multiple times an hour

Status in avahi package in Ubuntu:
  Fix Released

Bug description:
  Ever since upgrading to 16.10, avahi-daemon crashes multiple times an
  hour, leading to the prompt to report the issue to Ubuntu.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.10
  Package: avahi-daemon 0.6.32-1ubuntu1
  ProcVersionSignature: Ubuntu 4.8.0-26.28-generic 4.8.0
  Uname: Linux 4.8.0-26-generic x86_64
  NonfreeKernelModules: zfs zunicode zcommon znvpair zavl
  ApportVersion: 2.20.3-0ubuntu8
  Architecture: amd64
  CurrentDesktop: Unity
  Date: Tue Nov  1 10:48:35 2016
  InstallationDate: Installed on 2016-07-08 (115 days ago)
  InstallationMedia: Ubuntu 16.04 LTS "Xenial Xerus" - Release amd64 
(20160420.1)
  SourcePackage: avahi
  UpgradeStatus: Upgraded to yakkety on 2016-10-31 (0 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1638345/+subscriptions



[Touch-packages] [Bug 1638345] Re: avahi-daemon crashes multiple times an hour

2018-04-06 Thread Trent Lloyd
@jecs Thanks for the feedback; I am curious: how many services do you
have on your network?

If you run "avahi-browse -a -t | wc -l", how many lines do you get?

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1638345



[Touch-packages] [Bug 1563945] Re: package avahi-dnsconfd 0.6.32~rc+dfsg-1ubuntu2 failed to install/upgrade: subprocess installed pre-removal script returned error exit status 1

2018-04-06 Thread Trent Lloyd
This seems to be some kind of weird interaction with systemd
activation...

root@optane:/lib/systemd# systemctl stop avahi-daemon.socket
Job for avahi-daemon.socket canceled.

I think the issue is basically that the service is immediately started
again due to socket activation. I'm not sure why avahi-dnsconfd is
stopping it in the first place, so I will check into that.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1563945

Title:
  package avahi-dnsconfd 0.6.32~rc+dfsg-1ubuntu2 failed to
  install/upgrade: subprocess installed pre-removal script returned
  error exit status 1

Status in avahi package in Ubuntu:
  Confirmed

Bug description:
  interesting issues while updating packages this morning...

  ProblemType: Package
  DistroRelease: Ubuntu 16.04
  Package: avahi-dnsconfd 0.6.32~rc+dfsg-1ubuntu2
  ProcVersionSignature: Ubuntu 4.4.0-16.32-generic 4.4.6
  Uname: Linux 4.4.0-16-generic x86_64
  NonfreeKernelModules: wl
  ApportVersion: 2.20-0ubuntu3
  Architecture: amd64
  Date: Wed Mar 30 09:33:28 2016
  DpkgTerminalLog:
   Removing avahi-discover (0.6.32~rc+dfsg-1ubuntu2) ...
   Removing avahi-dnsconfd (0.6.32~rc+dfsg-1ubuntu2) ...
   Job for avahi-daemon.socket canceled.
   dpkg: error processing package avahi-dnsconfd (--remove):
subprocess installed pre-removal script returned error exit status 1
  DuplicateSignature:
   Removing avahi-dnsconfd (0.6.32~rc+dfsg-1ubuntu2) ...
   Job for avahi-daemon.socket canceled.
   dpkg: error processing package avahi-dnsconfd (--remove):
subprocess installed pre-removal script returned error exit status 1
  ErrorMessage: subprocess installed pre-removal script returned error exit 
status 1
  InstallationDate: Installed on 2016-01-06 (83 days ago)
  InstallationMedia: Ubuntu 15.10 "Wily Werewolf" - Release amd64 (20151021)
  RelatedPackageVersions:
   dpkg 1.18.4ubuntu1
   apt  1.2.9
  SourcePackage: avahi
  Title: package avahi-dnsconfd 0.6.32~rc+dfsg-1ubuntu2 failed to 
install/upgrade: subprocess installed pre-removal script returned error exit 
status 1
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1563945/+subscriptions



[Touch-packages] [Bug 1752411] Re: Can not ping IP addresses on remote network after connect

2018-04-08 Thread Trent Lloyd
No VPN in use.. this is probably a bug equally in bind9-host and avahi-
daemon

The host shouldn't be getting stuck and avahi should probably make the
script timeout somehow
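One way to bound the hang is to wrap the query in coreutils timeout(1). The sketch below is hypothetical, not the actual avahi-daemon-check-dns.sh code: the `check_dns` function name is illustrative, and `sleep 30` stands in for a stuck `host` lookup.

```shell
# Hypothetical sketch: guard a potentially-hanging DNS query with timeout(1).
# `sleep 30` stands in for a stuck `host` lookup; real code would run the
# actual query command instead.
check_dns() {
    if timeout 1 sleep 30; then
        echo "lookup finished"
    elif [ $? -eq 124 ]; then
        # timeout(1) exits with status 124 when it had to kill the command
        echo "lookup timed out"
    fi
}
check_dns   # prints "lookup timed out" after about one second
```

With a bound like this, a wedged bind9-host process would delay the check script by a second or two instead of blocking network setup indefinitely.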


** Also affects: bind9 (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: bind9 (Ubuntu)
   Importance: Undecided => Critical

** Also affects: avahi (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: avahi (Ubuntu)
   Importance: Undecided => High

** Changed in: bind9 (Ubuntu)
   Importance: Critical => High

** Changed in: bind9 (Ubuntu)
   Status: New => Confirmed

** Changed in: avahi (Ubuntu)
   Status: New => Confirmed

** Summary changed:

- Can not ping IP addresses on remote network after connect
+ bind9-host, avahi-daemon-check-dns.sh hang forever causes network connections 
to get stuck

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1752411

Title:
  bind9-host, avahi-daemon-check-dns.sh hang forever causes network
  connections to get stuck

Status in avahi package in Ubuntu:
  Confirmed
Status in bind9 package in Ubuntu:
  Confirmed
Status in openconnect package in Ubuntu:
  Confirmed

Bug description:
  On 18.04 Openconnect connects successfully to any of multiple VPN
  concentrators but network traffic does not flow across the VPN tunnel
  connection. When testing on 16.04 this works flawlessly. This also
  worked on this system when it was on 17.10.

  I have tried reducing the mtu of the tun0 network device but this has
  not resulted in me being able to successfully ping the IP address.

  Example showing ping attempt to the IP of DNS server:

  ~$ cat /etc/resolv.conf 
  # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
  # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
  # 127.0.0.53 is the systemd-resolved stub resolver.
  # run "systemd-resolve --status" to see details about the actual nameservers.

  nameserver 172.29.88.11
  nameserver 127.0.0.53

  liam@liam-lat:~$ netstat -nr
  Kernel IP routing table
  Destination     Gateway         Genmask         Flags MSS Window irtt Iface
  0.0.0.0         192.168.1.1     0.0.0.0         UG      0 0         0 wlp2s0
  105.27.198.106  192.168.1.1     255.255.255.255 UGH     0 0         0 wlp2s0
  169.254.0.0     0.0.0.0         255.255.0.0     U       0 0         0 docker0
  172.17.0.0      0.0.0.0         255.255.0.0     U       0 0         0 docker0
  172.29.0.0      0.0.0.0         255.255.0.0     U       0 0         0 tun0
  172.29.88.11    0.0.0.0         255.255.255.255 UH      0 0         0 tun0
  192.168.1.0     0.0.0.0         255.255.255.0   U       0 0         0 wlp2s0
  liam@liam-lat:~$ ping 172.29.88.11
  PING 172.29.88.11 (172.29.88.11) 56(84) bytes of data.
  ^C
  --- 172.29.88.11 ping statistics ---
  4 packets transmitted, 0 received, 100% packet loss, time 3054ms

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: openconnect 7.08-3
  ProcVersionSignature: Ubuntu 4.15.0-10.11-generic 4.15.3
  Uname: Linux 4.15.0-10-generic x86_64
  ApportVersion: 2.20.8-0ubuntu10
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Wed Feb 28 22:11:33 2018
  InstallationDate: Installed on 2017-06-15 (258 days ago)
  InstallationMedia: Ubuntu 16.04.1 LTS "Xenial Xerus" - Release amd64 
(20160719)
  SourcePackage: openconnect
  UpgradeStatus: Upgraded to bionic on 2018-02-22 (6 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1752411/+subscriptions



[Touch-packages] [Bug 1762320] [NEW] gcore does not execute, uses /bin/sh but requires bash

2018-04-09 Thread Trent Lloyd
Public bug reported:

gcore fails to execute on bionic

$ gcore
/usr/bin/gcore: 28: /usr/bin/gcore: Syntax error: "(" unexpected

Line 28 is:
 28 dump_all_cmds=()


This appears to be bash syntax for arrays (as reinforced further down),
which is not compatible with the /bin/sh shebang when /bin/sh is dash.

Upstream has a recent commit to move this from /bin/sh to bash (while also
making other changes in the same commit to fix problems with quoting):
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=commit;h=e1e6f073a9f5d7c3183cb8096fb24a42c28ba36b

As a minimum fix we should probably change it from /bin/sh to /bin/bash
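The incompatibility is easy to demonstrate. A small illustrative sketch (the variable name is taken from the gcore script; the appended element is made up):

```shell
# bash understands the empty-array assignment used on line 28 of gcore...
bash -c 'dump_all_cmds=(); dump_all_cmds+=("x"); echo "${#dump_all_cmds[@]}"'
# prints: 1

# ...while a POSIX shell such as dash (Ubuntu's /bin/sh) rejects the same
# syntax at parse time, which is exactly the reported failure.
if command -v dash >/dev/null 2>&1; then
    dash -c 'dump_all_cmds=()' 2>/dev/null || echo "dash: array syntax rejected"
fi
```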

** Affects: gdb (Ubuntu)
 Importance: Medium
 Status: Confirmed

** Changed in: gdb (Ubuntu)
   Status: New => Confirmed

** Changed in: gdb (Ubuntu)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to gdb in Ubuntu.
https://bugs.launchpad.net/bugs/1762320

Title:
  gcore does not execute, uses /bin/sh but requires bash

Status in gdb package in Ubuntu:
  Confirmed

Bug description:
  gcore fails to execute on bionic

  $ gcore
  /usr/bin/gcore: 28: /usr/bin/gcore: Syntax error: "(" unexpected

  Line 28 is:
   28 dump_all_cmds=()  
  

  This appears to be bash syntax for arrays (as reinforced further down)
  which is not compatible with the /bin/sh shebang using dash.

  Upstream has a recent commit to move this from /bin/sh to bash (along also 
making other changes to fix problems with quoting within the same commit):
  
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=commit;h=e1e6f073a9f5d7c3183cb8096fb24a42c28ba36b

  As a minimum fix we should probably change it from /bin/sh to
  /bin/bash

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gdb/+bug/1762320/+subscriptions



[Touch-packages] [Bug 1762320] Re: gcore does not execute, uses /bin/sh but requires bash

2018-04-09 Thread Trent Lloyd
** Patch added: "gdb-bionic-gcore-bash.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/gdb/+bug/1762320/+attachment/5107574/+files/gdb-bionic-gcore-bash.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to gdb in Ubuntu.
https://bugs.launchpad.net/bugs/1762320



[Touch-packages] [Bug 1760128] Re: package avahi-dnsconfd 0.6.32~rc+dfsg-1ubuntu2 failed to install/upgrade: subprocess new pre-removal script returned error exit status 1

2018-04-09 Thread Trent Lloyd
Analysis here appears related:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=768620

The interactions here seem to be quite complex.

To reproduce the problem, you can do on xenial:
(install old version)
# apt install avahi-daemon=0.6.32~rc+dfsg-1ubuntu2 avahi-dnsconfd=0.6.32~rc+dfsg-1ubuntu2 avahi-utils=0.6.32~rc+dfsg-1ubuntu2

(in a second terminal, leave this running)
while true; do avahi-browse -a; done

(back in the first terminal)
apt upgrade  # will upgrade from the xenial version 1ubuntu2 to the xenial-updates version 1ubuntu2.1


Looks like the --restart-after-upgrade change may be what we need, except I
can't see where that's actually used in the current Ubuntu or Debian package.
It seems this shipped patch may have "fixed" the issue instead. This patch is
in bionic, but not in xenial, where we are seeing this issue reported.

lathiat@optane:~/src/debian/avahi-0.7$ cat debian/patches/no-systemd-also.patch 
Description: Don't use 'Also=' in dnsconfd systemd unit
 'Also=avahi-daemon.socket' means that 'systemctl disable avahi-dnsconfd'
 will also disable avahi-daemon.socket, which is definitely not what we
 want, and it also causes debhelper to throw an error.  Just drop this entry
 from the configuration.
Author: Steve Langasek 
Last-Modified: 2018-01-02 20:30:00 -0800
Bug-Debian: https://bugs.debian.org/878911

Index: avahi-0.7-3ubuntu1/avahi-dnsconfd/avahi-dnsconfd.service.in
===
--- avahi-0.7-3ubuntu1.orig/avahi-dnsconfd/avahi-dnsconfd.service.in
+++ avahi-0.7-3ubuntu1/avahi-dnsconfd/avahi-dnsconfd.service.in
@@ -26,4 +26,3 @@
 
 [Install]
 WantedBy=multi-user.target
-Also=avahi-daemon.socket



** Bug watch added: Debian Bug tracker #768620
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=768620

** Bug watch added: Debian Bug tracker #878911
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=878911

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1760128

Title:
  package avahi-dnsconfd 0.6.32~rc+dfsg-1ubuntu2 failed to
  install/upgrade: subprocess new pre-removal script returned error exit
  status 1

Status in avahi package in Ubuntu:
  Triaged

Bug description:
  Ubuntu Update 30.03.18

  ProblemType: Package
  DistroRelease: Ubuntu 16.04
  Package: avahi-dnsconfd 0.6.32~rc+dfsg-1ubuntu2
  ProcVersionSignature: Ubuntu 4.4.0-116.140-generic 4.4.98
  Uname: Linux 4.4.0-116-generic x86_64
  NonfreeKernelModules: nvidia_uvm nvidia
  ApportVersion: 2.20.1-0ubuntu2.15
  Architecture: amd64
  Date: Fri Mar 30 16:32:50 2018
  ErrorMessage: subprocess new pre-removal script returned error exit status 1
  InstallationDate: Installed on 2016-01-31 (788 days ago)
  InstallationMedia: Ubuntu-MATE 16.04 LTS "Xenial Xerus" - Alpha amd64 
(20160131)
  RelatedPackageVersions:
   dpkg 1.18.4ubuntu1.4
   apt  1.2.26
  SourcePackage: avahi
  Title: package avahi-dnsconfd 0.6.32~rc+dfsg-1ubuntu2 failed to 
install/upgrade: subprocess new pre-removal script returned error exit status 1
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1760128/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1760128] Re: package avahi-dnsconfd 0.6.32~rc+dfsg-1ubuntu2 failed to install/upgrade: subprocess new pre-removal script returned error exit status 1

2018-04-09 Thread Trent Lloyd
If you are hit by this issue, it is usually sufficient to ask dpkg to
finish configuration, since the failed prerm script is not retried
automatically:

$ dpkg --configure -a

Alternatively, re-run your upgrade command (e.g. "apt upgrade"), which
does the same and also finishes any upgrades that didn't complete (which
may well be the case) - you'll probably want to do that either way.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1760128




[Touch-packages] [Bug 1752411] Re: bind9-host, avahi-daemon-check-dns.sh hang forever causes network connections to get stuck

2018-04-09 Thread Trent Lloyd
I did some testing using strace and looked at backtraces of why "host"
is stuck, and it's not immediately clear to me why. It will need a more
in-depth trace of its actual execution - the program is multi-threaded
and uses poll, so the trace is not straightforward to follow for
someone unfamiliar with the code base.

I did verify that when it happens, the network interfaces are up and
systemd-resolved is started - and I can see sendmsg/recvmsg appear to
succeed against both the systemd stub resolver and my local DNS server.
I also tried explicitly setting the timeout with host -W 5 (this should
be the default, but I wanted to rule out the -w indefinite-wait
option). However, the 'host' command always works when I log into the
system while the other commands are still stuck in the background - so
something strange is going on.

What does work is executing 'host' under /usr/bin/timeout. Given the
severity of this issue (it makes startup hang without SSH for several
minutes, and blocks everything else from starting up seemingly
forever), I suggest we ship a fix for bionic that uses timeout to work
around the issue for now.

/usr/lib/avahi/avahi-daemon-check-dns.sh : dns_has_local()
  OUT=`LC_ALL=C /usr/bin/timeout 5 host -t soa local. 2>&1`
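The key property the wrapper relies on can be checked in isolation. A
minimal sketch, assuming GNU coreutils' timeout(1) is available - here
"sleep 10" stands in for the stuck "host -t soa local." query; this is
not the actual avahi script:

```shell
#!/bin/sh
# timeout(1) kills a child that fails to finish within the limit and
# exits with status 124, so the caller regains control instead of
# hanging forever. "sleep 10" stands in for the stuck host lookup.
OUT=$(LC_ALL=C timeout 1 sleep 10 2>&1)
echo "status=$?"
```

Because the wrapped command is killed after the limit, the script prints
status=124 rather than blocking; the check-dns script can then treat the
lookup as failed and carry on with boot.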


** Changed in: openconnect (Ubuntu)
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1752411

Title:
  bind9-host, avahi-daemon-check-dns.sh hang forever causes network
  connections to get stuck

Status in avahi package in Ubuntu:
  Confirmed
Status in bind9 package in Ubuntu:
  Confirmed
Status in openconnect package in Ubuntu:
  Invalid

Bug description:
  On 18.04 Openconnect connects successfully to any of multiple VPN
  concentrators but network traffic does not flow across the VPN tunnel
  connection. When testing on 16.04 this works flawlessly. This also
  worked on this system when it was on 17.10.

  I have tried reducing the mtu of the tun0 network device but this has
  not resulted in me being able to successfully ping the IP address.

  Example showing ping attempt to the IP of DNS server:

  ~$ cat /etc/resolv.conf 
  # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
  # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
  # 127.0.0.53 is the systemd-resolved stub resolver.
  # run "systemd-resolve --status" to see details about the actual nameservers.

  nameserver 172.29.88.11
  nameserver 127.0.0.53

  liam@liam-lat:~$ netstat -nr
  Kernel IP routing table
  Destination Gateway Genmask Flags   MSS Window  irtt Iface
  0.0.0.0 192.168.1.1 0.0.0.0 UG0 0  0 
wlp2s0
  105.27.198.106  192.168.1.1 255.255.255.255 UGH   0 0  0 
wlp2s0
  169.254.0.0 0.0.0.0 255.255.0.0 U 0 0  0 
docker0
  172.17.0.0  0.0.0.0 255.255.0.0 U 0 0  0 
docker0
  172.29.0.0  0.0.0.0 255.255.0.0 U 0 0  0 tun0
  172.29.88.110.0.0.0 255.255.255.255 UH0 0  0 tun0
  192.168.1.0 0.0.0.0 255.255.255.0   U 0 0  0 
wlp2s0
  liam@liam-lat:~$ ping 172.29.88.11
  PING 172.29.88.11 (172.29.88.11) 56(84) bytes of data.
  ^C
  --- 172.29.88.11 ping statistics ---
  4 packets transmitted, 0 received, 100% packet loss, time 3054ms

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: openconnect 7.08-3
  ProcVersionSignature: Ubuntu 4.15.0-10.11-generic 4.15.3
  Uname: Linux 4.15.0-10-generic x86_64
  ApportVersion: 2.20.8-0ubuntu10
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Wed Feb 28 22:11:33 2018
  InstallationDate: Installed on 2017-06-15 (258 days ago)
  InstallationMedia: Ubuntu 16.04.1 LTS "Xenial Xerus" - Release amd64 
(20160719)
  SourcePackage: openconnect
  UpgradeStatus: Upgraded to bionic on 2018-02-22 (6 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1752411/+subscriptions



[Touch-packages] [Bug 1761763] Re: package avahi-daemon 0.6.32-1ubuntu1.1 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1

2018-04-11 Thread Trent Lloyd
Thanks for the report.

How did this happen - was it during a package upgrade of a normal
installation, or is this a new install? If a new install, please
describe which install media download and options you used. Or were you
installing avahi-daemon (or some other package) with apt install, etc.?

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1761763

Title:
  package avahi-daemon 0.6.32-1ubuntu1.1 failed to install/upgrade:
  subprocess installed post-installation script returned error exit
  status 1

Status in avahi package in Ubuntu:
  Confirmed

Bug description:
  This is using 17.10 so I believe it is different than the 1633217 bug
  report

  ProblemType: Package
  DistroRelease: Ubuntu 17.10
  Package: avahi-daemon 0.6.32-1ubuntu1.1
  ProcVersionSignature: Ubuntu 4.13.0-38.43-generic 4.13.16
  Uname: Linux 4.13.0-38-generic x86_64
  ApportVersion: 2.20.7-0ubuntu3.7
  Architecture: amd64
  Date: Fri Apr  6 08:18:27 2018
  DpkgHistoryLog:
   Start-Date: 2018-04-06  08:18:02
   Upgrade: libvncclient1:amd64 (0.9.11+dfsg-1, 0.9.11+dfsg-1ubuntu0.1), 
libraw16:amd64 (0.18.2-2ubuntu0.1, 0.18.2-2ubuntu0.2), libavahi-glib1:amd64 
(0.6.32-1ubuntu1, 0.6.32-1ubuntu1.1), libavahi-common-data:amd64 
(0.6.32-1ubuntu1, 0.6.32-1ubuntu1.1), libavahi-common3:amd64 (0.6.32-1ubuntu1, 
0.6.32-1ubuntu1.1), libavahi-ui-gtk3-0:amd64 (0.6.32-1ubuntu1, 
0.6.32-1ubuntu1.1), avahi-daemon:amd64 (0.6.32-1ubuntu1, 0.6.32-1ubuntu1.1), 
libavahi-core7:amd64 (0.6.32-1ubuntu1, 0.6.32-1ubuntu1.1), avahi-autoipd:amd64 
(0.6.32-1ubuntu1, 0.6.32-1ubuntu1.1), python3-crypto:amd64 (2.6.1-7build2, 
2.6.1-7ubuntu0.1), avahi-utils:amd64 (0.6.32-1ubuntu1, 0.6.32-1ubuntu1.1), 
libavahi-client3:amd64 (0.6.32-1ubuntu1, 0.6.32-1ubuntu1.1), hdparm:amd64 
(9.51+ds-1, 9.51+ds-1ubuntu0.1)
  ErrorMessage: subprocess installed post-installation script returned error 
exit status 1
  InstallationDate: Installed on 2018-01-07 (89 days ago)
  InstallationMedia: Ubuntu 17.10 "Artful Aardvark" - Release amd64 (20171018)
  Python3Details: /usr/bin/python3.6, Python 3.6.3, python3-minimal, 
3.6.3-0ubuntu2
  PythonDetails: /usr/bin/python2.7, Python 2.7.14, python-minimal, 
2.7.14-2ubuntu1
  RelatedPackageVersions:
   dpkg 1.18.24ubuntu1
   apt  1.5.1
  SourcePackage: avahi
  Title: package avahi-daemon 0.6.32-1ubuntu1.1 failed to install/upgrade: 
subprocess installed post-installation script returned error exit status 1
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1761763/+subscriptions



[Touch-packages] [Bug 1761763] Re: package avahi-daemon 0.6.32-1ubuntu1.1 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1

2018-04-11 Thread Trent Lloyd
Looking at your logs, generally it seems like dbus is broken for some
reason.

It would also be great to check its status:
# systemctl status dbus

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1761763




[Touch-packages] [Bug 831022] Re: avahi daemon erroneously assumes host name conflicts (and causes more trouble then)

2018-04-13 Thread Trent Lloyd
FYI: this will not affect DHCP. The changed name is only used for mDNS
and is not set as the system hostname, so DHCP and the like are
unaffected.

The cause of this is known, and hopefully a fix will land soon.
Basically it happens when IP addresses are added and then removed too
quickly. It can also happen with IPv6, because when a global address
comes up the link-local address is dropped. The code doesn't handle its
own previous announcement coming back into the avahi process and thinks
it is a conflict. It's basically a race condition.

Correct upstream issue is https://github.com/lathiat/avahi/issues/117

** Bug watch added: github.com/lathiat/avahi/issues #117
   https://github.com/lathiat/avahi/issues/117

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/831022

Title:
  avahi daemon erroneously assumes host name conflicts (and causes more
  trouble then)

Status in Avahi:
  New
Status in avahi package in Ubuntu:
  Triaged

Bug description:
  Hi,

  i am running the server edition of Ubuntu LTS on a HP Proliant server.
  That avahi daemon is driving me crazy.

  The server is named Server1 , retrieves it's IP address
  pseudostatically from a DHCP server, and announces several of its
  services through avahi (e.g. offering print services to iPads, that's
  why Avahi is needed).

  every now and then (not always) avahi daemon claims to have detected a
  name collision at boot time. It then logs

  Aug 22 13:09:40 Server1 avahi-daemon[1520]: Host name conflict,
  retrying with 

  and uses Server1-2 as it's name, causing even the DHCP-Server to apply
  with a different host name, thus causing a new IP address to be
  assigned. This avahi daemon is breaking my network structure and does
  not even tell in the logs, where the conflict came from or what made
  the daemon believe that there was a conflict.

  ProblemType: Bug
  DistroRelease: Ubuntu 10.04
  Package: avahi-daemon 0.6.25-1ubuntu6.2
  ProcVersionSignature: Ubuntu 2.6.32-33.72-server 2.6.32.41+drm33.18
  Uname: Linux 2.6.32-33-server x86_64
  Architecture: amd64
  Date: Mon Aug 22 13:24:06 2011
  InstallationMedia: Ubuntu-Server 10.04.1 LTS "Lucid Lynx" - Release amd64 
(20100816.2)
  ProcEnviron:
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/tcsh
  SourcePackage: avahi

To manage notifications about this bug go to:
https://bugs.launchpad.net/avahi/+bug/831022/+subscriptions



[Touch-packages] [Bug 1752411] Re: bind9-host, avahi-daemon-check-dns.sh hang forever causes network connections to get stuck

2018-04-15 Thread Trent Lloyd
With host -d I simply get

> Trying "local"

When it works normally I get;

Trying "local"
Host local. not found: 3(NXDOMAIN)
Received 98 bytes from 10.48.134.6#53 in 1 ms
Received 98 bytes from 10.48.134.6#53 in 1 ms

The system I am hitting this issue on is an upgraded system (rather than
a fresh install which wouldn't use ifupdown)

Because this is a serious issue for bionic upgraders and release is
imminent, I am attaching a debdiff that uses 'timeout' to work around
the issue for now. The core issue with 'host' still needs to be
investigated (as this workaround may add 5-second delays to boot-up),
but the timeout is probably a good safety net anyway. Arguably the
entire check-dns script should be launched under timeout.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1752411




[Touch-packages] [Bug 1752411] Re: bind9-host, avahi-daemon-check-dns.sh hang forever causes network connections to get stuck

2018-04-15 Thread Trent Lloyd
** Patch added: "lp1752411-avahi-host-timeout.diff"
   
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1752411/+attachment/5117372/+files/lp1752411-avahi-host-timeout.diff

** Changed in: avahi (Ubuntu)
 Assignee: (unassigned) => Trent Lloyd (lathiat)

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1752411




[Touch-packages] [Bug 1763758] Re: package libavahi-glib1:amd64 0.6.32~rc+dfsg-1ubuntu2.1 failed to install/upgrade: package is in a very bad inconsistent state; you should reinstall it before attemp

2018-04-16 Thread Trent Lloyd
Not sure how this happened - it's a strange dpkg issue that usually
points to filesystem corruption or a shutdown during a package install.
Unusual to see.

To get your system back on track, try some combination of these
commands (you might need to run each a couple of times):

sudo dpkg --configure -a
sudo apt-get -f install
sudo apt install libavahi-glib1 --reinstall


** Changed in: avahi (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1763758

Title:
  package libavahi-glib1:amd64 0.6.32~rc+dfsg-1ubuntu2.1 failed to
  install/upgrade: package is in a very bad inconsistent state; you
  should  reinstall it before attempting configuration

Status in avahi package in Ubuntu:
  Invalid

Bug description:
  "System program problem detected" message appeared.

  ProblemType: Package
  DistroRelease: Ubuntu 16.04
  Package: libavahi-glib1:amd64 0.6.32~rc+dfsg-1ubuntu2.1
  ProcVersionSignature: Ubuntu 4.13.0-38.43~16.04.1-generic 4.13.16
  Uname: Linux 4.13.0-38-generic x86_64
  NonfreeKernelModules: wl
  ApportVersion: 2.20.1-0ubuntu2.15
  Architecture: amd64
  Date: Fri Apr 13 17:28:41 2018
  DuplicateSignature:
   package:libavahi-glib1:amd64:0.6.32~rc+dfsg-1ubuntu2.1
   Processing triggers for libc-bin (2.23-0ubuntu10) ...
   dpkg: error processing package libavahi-glib1:amd64 (--configure):
package is in a very bad inconsistent state; you should
  ErrorMessage: package is in a very bad inconsistent state; you should  
reinstall it before attempting configuration
  InstallationDate: Installed on 2018-01-19 (84 days ago)
  InstallationMedia: Ubuntu 16.04.3 LTS "Xenial Xerus" - Release amd64 
(20170801)
  RelatedPackageVersions:
   dpkg 1.18.4ubuntu1.4
   apt  1.2.26
  SourcePackage: avahi
  Title: package libavahi-glib1:amd64 0.6.32~rc+dfsg-1ubuntu2.1 failed to 
install/upgrade: package is in a very bad inconsistent state; you should  
reinstall it before attempting configuration
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1763758/+subscriptions



[Touch-packages] [Bug 1752411] Re: bind9-host, avahi-daemon-check-dns.sh hang forever causes network connections to get stuck

2018-04-17 Thread Trent Lloyd
There is a new bind9 upload to bionic-proposed (9.11.3+dfsg-1ubuntu1)

Tested with this version and 'host' is still hanging.  So this fix is
still required.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1752411




Re: [Touch-packages] [Bug 1661869] Re: maas install fails inside of a 16.04 lxd container due to avahi problems

2018-03-04 Thread Trent Lloyd
I’ll follow up on this tomorrow  and see what I need to get it pushed
through.

On Sun, 4 Mar 2018 at 9:50 pm, TJ  wrote:

> In IRC support we've been getting reports about this issue for 17.10;
> Can we get the SRU pushed out?
>
> --
> You received this bug notification because you are a member of Avahi,
> which is subscribed to avahi in Ubuntu.
> https://bugs.launchpad.net/bugs/1661869
>
> Title:
>   maas install fails inside of a 16.04 lxd container due to avahi
>   problems
>
> Status in MAAS:
>   Invalid
> Status in avahi package in Ubuntu:
>   In Progress
> Status in lxd package in Ubuntu:
>   Invalid
>
> Bug description:
>   The bug, and workaround, are clearly described in this mailing list
>   thread:
>
>   https://lists.linuxcontainers.org/pipermail/lxc-
>   users/2016-January/010791.html
>
>   I'm trying to install MAAS in a LXD container, but that's failing due
>   to avahi package install problems.  I'm tagging all packages here.
>
> To manage notifications about this bug go to:
> https://bugs.launchpad.net/maas/+bug/1661869/+subscriptions

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1661869

Title:
  maas install fails inside of a 16.04 lxd container due to avahi
  problems

Status in MAAS:
  Invalid
Status in avahi package in Ubuntu:
  In Progress
Status in lxd package in Ubuntu:
  Invalid

Bug description:
  The bug, and workaround, are clearly described in this mailing list
  thread:

  https://lists.linuxcontainers.org/pipermail/lxc-
  users/2016-January/010791.html

  I'm trying to install MAAS in a LXD container, but that's failing due
  to avahi package install problems.  I'm tagging all packages here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1661869/+subscriptions



[Touch-packages] [Bug 1754244] [NEW] [bionic] evolution-source-registry always using 100% cpu

2018-03-07 Thread Trent Lloyd
Public bug reported:

evolution-source-registry is using 100% CPU after login and never stops,
it continues to do so after kill/respawn or reboot.  This started
sometime after upgrade to bionic and has been consistent for the last
couple of weeks.

Looking at the process with perf, it seems to be repeatedly calling
poll() via g_main_context_poll() via g_main_context_iterate() in a tight
loop.  In strace you can also see poll being called about 10,000
times/second.

poll([{fd=3, events=POLLIN}, {fd=11, events=POLLIN}], 2, 0) = 0
(Timeout)

A gdb backtrace for the main thread is reflected in the attached out.svg -
setting a breakpoint shows g_main_context_iterate being called continually,
but we never exit from g_main_loop_run.

(gdb) bt
#0  0x7f6e1ce30150 in g_main_context_iterate (context=0x55ca494d18c0, 
block=block@entry=1, dispatch=dispatch@entry=1, self=)
at ../../../../glib/gmain.c:3840
#1  0x7f6e1ce30662 in g_main_loop_run (loop=0x55ca4949f580) at 
../../../../glib/gmain.c:4099
#2  0x7f6e1df22710 in dbus_server_run_server (server=0x55ca494df1a0 
[ESourceRegistryServer]) at ./src/libebackend/e-dbus-server.c:247
#3  0x7f6e19502dae in ffi_call_unix64 () at 
/usr/lib/x86_64-linux-gnu/libffi.so.6
#4  0x7f6e1950271f in ffi_call () at /usr/lib/x86_64-linux-gnu/libffi.so.6
#5  0x7f6e1d10ab4d in g_cclosure_marshal_generic_va 
(closure=0x55ca494d8ab0, return_value=0x7ffd34330e10, instance=, 
args_list=, marshal_data=, n_params=0, 
param_types=0x0) at ../../../../gobject/gclosure.c:1604
#6  0x7f6e1d10a1a6 in _g_closure_invoke_va (closure=0x55ca494d8ab0, 
return_value=0x7ffd34330e10, instance=0x55ca494df1a0, args=0x7ffd34330ec0, 
n_params=0, param_types=0x0) at ../../../../gobject/gclosure.c:867
#7  0x7f6e1d1256df in g_signal_emit_valist (instance=0x55ca494df1a0, 
signal_id=, detail=, 
var_args=var_args@entry=0x7ffd34330ec0)
at ../../../../gobject/gsignal.c:3300
#8  0x7f6e1d125e0f in g_signal_emit 
(instance=instance@entry=0x55ca494df1a0, signal_id=, 
detail=detail@entry=0) at ../../../../gobject/gsignal.c:3447
#9  0x7f6e1df22a1d in e_dbus_server_run (server=0x55ca494df1a0 
[ESourceRegistryServer], wait_for_client=0) at 
./src/libebackend/e-dbus-server.c:441
#10 0x55ca473cec0c in main (argc=, argv=) at 
./src/services/evolution-source-registry/evolution-source-registry.c:233


ProblemType: Bug
DistroRelease: Ubuntu 18.04
Package: evolution-data-server 3.27.90-1ubuntu1
ProcVersionSignature: Ubuntu 4.15.0-11.12-generic 4.15.5
Uname: Linux 4.15.0-11-generic x86_64
NonfreeKernelModules: nvidia_modeset nvidia zfs zunicode zavl icp zcommon 
znvpair
ApportVersion: 2.20.8-0ubuntu10
Architecture: amd64
CurrentDesktop: GNOME
Date: Thu Mar  8 13:28:39 2018
EcryptfsInUse: Yes
ExecutablePath: /usr/lib/evolution/evolution-source-registry
ProcEnviron:
 LANG=en_AU.UTF-8
 LANGUAGE=en_AU:en
 PATH=(custom, user)
 SHELL=/bin/bash
 XDG_RUNTIME_DIR=
SourcePackage: evolution-data-server
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: evolution-data-server (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug bionic third-party-packages


[Touch-packages] [Bug 1754270] [NEW] systemd-user PAM configuration should initialize the keyring with pam_keyinit

2018-03-08 Thread Trent Lloyd
Public bug reported:

/etc/pam.d/systemd-user does not currently call pam_keyinit.so -- it's
possible this should instead be added to common-session-noninteractive,
but I am not entirely sure about that; someone with more understanding
of the PAM modules would probably need to weigh in.  In any case,
systemd-user itself should at least have it, as it has its own special
PAM name for processes it launches.

This means that the keyring does not link to the user keyring as it
should, and will cause issues with programs needing a key from the
keyring.  In particular, the use case that breaks for me is using
'fscrypt' and 'libpam-fscrypt'; however, anything making use of kernel
keyrings would be affected.

Something non-obvious about this is that many desktop session processes
are started under 'systemd-user' instead of the 'session' - this
includes gnome-terminal-server, which means any gnome-terminal shell runs
under this context.  If you start xterm instead of gnome-terminal, you
get a different keyring, and this can cause confusion when debugging the
issue, as some processes are in one state and the others are in another,
including your primary debug tool, gnome-terminal.  You can verify this
by running 'systemctl status $(pidof gnome-terminal)' and 'systemctl
status $(pidof xterm)' and noting the different hierarchy.

The change to add pam_keyinit.so was made upstream in December 2016:
https://github.com/systemd/systemd/commit/ab79099d1684457d040ee7c28b2012e8c1ea9a4f

Ubuntu should make the same change so that services needing a keyring
will work correctly in the desktop session, and the same keyring is used
for processes launched under both methods.  In my test I add the usual
pam_keyinit.so line after "pam_limits.so" and before "common-session-
noninteractive".  I am not sure if this is the most ideal location (but
it appears to work).
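
To make the suggested change concrete, here is a rough sketch of how the
modified /etc/pam.d/systemd-user could look.  The surrounding lines are
illustrative only (the stock file differs between releases), and the
control field and options shown ('optional', 'force revoke') follow common
distro configurations and may differ from the exact upstream change:

# Sketch only - the stock /etc/pam.d/systemd-user varies between releases.
@include common-account

session  required pam_limits.so
# Proposed addition (mirroring the upstream systemd commit): create a new
# session keyring and link it to the user keyring.
session  optional pam_keyinit.so force revoke
@include common-session-noninteractive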


You can test the behavior by running "keyctl show @s" in both contexts

Working contexts:
- xterm
- SSH login

Broken contexts:
- gnome-terminal
- systemd-run --user keyctl show @s (then check output with journalctl --user 
--follow)


When the configuration is broken you will see this output:
lathiat@ubuntu:~/src/systemd$ keyctl show @s
Keyring
  59654779 --alswrv   1000  1000  keyring: _ses
   6806191 s-rv  0 0   \_ user: invocation_id

When the configuration is working, you will see a link to the user session 
instead:
lathiat@ubuntu:~/src/systemd$ keyctl show @s
Keyring
  59654779 --alswrv   1000  1000  keyring: _ses
   6806191 s-rv  0 0   \_ keyring: _uid.1000


As background, what is broken on my test setup with libpam-fscrypt?
gnome-terminal, for example, is unable to write any file in my encrypted /home,
which means that it cannot save preferences; if you go into preferences and
try to tick a checkbox it will immediately revert and an error is logged to the
journal.  You can use the guide at https://github.com/google/fscrypt to set up
such a system if you wish to fully test my case, but you can simply verify the
behavior as above.

Verified on bionic (it's the only version with fscrypt); however, this
issue extends back to at least xenial.

** Affects: systemd (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1754270

Title:
  systemd-user PAM configuration should initialize the keyring with
  pam_keyinit

Status in systemd package in Ubuntu:
  New

Bug description:
  /etc/pam.d/systemd-user does not currently call pam_keyinit.so -- it's
  possible this should instead be added to common-session-noninteractive,
  but I am not entirely sure about that; someone with more understanding
  of the PAM modules would probably need to weigh in.  In any case,
  systemd-user itself should at least have it, as it has its own special
  PAM name for processes it launches.

  This means that the keyring does not link to the user keyring as it
  should, and will cause issues with programs needing a key from the
  keyring.  In particular, the use case that breaks for me is using
  'fscrypt' and 'libpam-fscrypt'; however, anything making use of kernel
  keyrings would be affected.

  Something non-obvious about this is that many desktop session
  processes are started under 'systemd-user' instead of the 'session' -
  this includes gnome-terminal-server, which means any gnome-terminal
  shell runs under this context.  If you start xterm instead of gnome-
  terminal, you get a different keyring, and this can cause confusion
  when debugging the issue, as some processes are in one state and the
  others are in another, including your primary debug tool, gnome-
  terminal.  You can verify this by running 'systemctl status $(pidof
  gnome-terminal)' and 'systemctl status $(pidof xterm)' and noting the
  different hierarchy.

  The change to add pam_keyinit.so was made upstream in December 

[Touch-packages] [Bug 1754270] Re: systemd-user PAM configuration should initialize the keyring with pam_keyinit

2018-03-08 Thread Trent Lloyd
Found a Debian bug about the same issue but with AFS instead of fscrypt:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=846377

** Bug watch added: Debian Bug tracker #846377
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=846377

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1754270

Title:
  systemd-user PAM configuration should initialize the keyring with
  pam_keyinit

Status in systemd package in Ubuntu:
  New

Bug description:
  /etc/pam.d/systemd-user does not currently call pam_keyinit.so -- it's
  possible this should instead be added to common-session-noninteractive,
  but I am not entirely sure about that; someone with more understanding
  of the PAM modules would probably need to weigh in.  In any case,
  systemd-user itself should at least have it, as it has its own special
  PAM name for processes it launches.

  This means that the keyring does not link to the user keyring as it
  should, and will cause issues with programs needing a key from the
  keyring.  In particular, the use case that breaks for me is using
  'fscrypt' and 'libpam-fscrypt'; however, anything making use of kernel
  keyrings would be affected.

  Something non-obvious about this is that many desktop session
  processes are started under 'systemd-user' instead of the 'session' -
  this includes gnome-terminal-server, which means any gnome-terminal
  shell runs under this context.  If you start xterm instead of gnome-
  terminal, you get a different keyring, and this can cause confusion
  when debugging the issue, as some processes are in one state and the
  others are in another, including your primary debug tool, gnome-
  terminal.  You can verify this by running 'systemctl status $(pidof
  gnome-terminal)' and 'systemctl status $(pidof xterm)' and noting the
  different hierarchy.

  The change to add pam_keyinit.so was made upstream in December 2016:
  
https://github.com/systemd/systemd/commit/ab79099d1684457d040ee7c28b2012e8c1ea9a4f

  Ubuntu should make the same change so that services needing a keyring
  will work correctly in the desktop session, and the same keyring is
  used for processes launched under both methods.  In my test I add the
  usual pam_keyinit.so line after "pam_limits.so" and before "common-
  session-noninteractive".  I am not sure if this is the most ideal
  location (but it appears to work).

  
  You can test the behavior by running "keyctl show @s" in both contexts

  Working contexts:
  - xterm
  - SSH login

  Broken contexts:
  - gnome-terminal
  - systemd-run --user keyctl show @s (then check output with journalctl --user 
--follow)

  
  When the configuration is broken you will see this output:
  lathiat@ubuntu:~/src/systemd$ keyctl show @s
  Keyring
59654779 --alswrv   1000  1000  keyring: _ses
 6806191 s-rv  0 0   \_ user: invocation_id

  When the configuration is working, you will see a link to the user session 
instead:
  lathiat@ubuntu:~/src/systemd$ keyctl show @s
  Keyring
59654779 --alswrv   1000  1000  keyring: _ses
 6806191 s-rv  0 0   \_ keyring: _uid.1000

  
  As background, what is broken on my test setup with libpam-fscrypt?
  gnome-terminal, for example, is unable to write any file in my encrypted /home, 
which means that it cannot save preferences; if you go into preferences and 
try to tick a checkbox it will immediately revert and an error is logged to the 
journal.  You can use the guide at https://github.com/google/fscrypt to set up 
such a system if you wish to fully test my case, but you can simply verify the 
behavior as above.

  Verified on bionic (it's the only version with fscrypt); however, this
  issue extends back to at least xenial.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1754270/+subscriptions



[Touch-packages] [Bug 1638345] Re: avahi-daemon crashes multiple times an hour

2018-03-14 Thread Trent Lloyd
** Changed in: avahi (Ubuntu)
 Assignee: (unassigned) => Trent Lloyd (lathiat)

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1638345

Title:
  avahi-daemon crashes multiple times an hour

Status in avahi package in Ubuntu:
  Confirmed

Bug description:
  Ever since upgrading to 16.10, avahi-daemon crashes multiple times an
  hour, leading to the prompt to report the issue to Ubuntu.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.10
  Package: avahi-daemon 0.6.32-1ubuntu1
  ProcVersionSignature: Ubuntu 4.8.0-26.28-generic 4.8.0
  Uname: Linux 4.8.0-26-generic x86_64
  NonfreeKernelModules: zfs zunicode zcommon znvpair zavl
  ApportVersion: 2.20.3-0ubuntu8
  Architecture: amd64
  CurrentDesktop: Unity
  Date: Tue Nov  1 10:48:35 2016
  InstallationDate: Installed on 2016-07-08 (115 days ago)
  InstallationMedia: Ubuntu 16.04 LTS "Xenial Xerus" - Release amd64 
(20160420.1)
  SourcePackage: avahi
  UpgradeStatus: Upgraded to yakkety on 2016-10-31 (0 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1638345/+subscriptions



[Touch-packages] [Bug 1661869] Re: maas install fails inside of a 16.04 lxd container due to avahi problems

2018-03-14 Thread Trent Lloyd
** Description changed:

- The bug, and workaround, are clearly described in this mailing list
- thread:
+ [Original Description]
+ The bug, and workaround, are clearly described in this mailing list thread:
  
  https://lists.linuxcontainers.org/pipermail/lxc-
  users/2016-January/010791.html
  
  I'm trying to install MAAS in a LXD container, but that's failing due to
  avahi package install problems.  I'm tagging all packages here.
+ 
+ [Issue]
+ Avahi sets a number of rlimits on startup, including the maximum number of 
processes (nproc=2) and limits on memory usage.  These limits are hit in a 
number of cases - specifically, the maximum process limit is hit if you run lxd 
containers in 'privileged' mode (so that avahi has the same uid in multiple 
containers), and large networks can trigger the memory limit.
+ 
+ The fix is to remove these default rlimits completely from the
+ configuration file.
+ 
+ [Impact]
+ 
+  * Avahi is unable to start inside of containers without UID namespace 
isolation because an rlimit on the maximum number of processes is set by 
default to 2.  When a container launches Avahi, the total number of processes 
on the system in all containers exceeds this limit and Avahi is killed.  It 
also fails at install time, rather than runtime, due to a failure to start the 
service.
+  * Some users also have issues with the maximum memory allocation causing 
Avahi to exit on networks with a large number of services, as the memory limit 
was quite small (4MB).  Refer to LP #1638345.
+ 
+ [Test Case]
+ 
+  * setup lxd (apt install lxd, lxd init, get working networking)
+  * lxc launch ubuntu:16.04 avahi-test --config security.privileged=true 
+  * lxc exec avahi-test sudo apt install avahi-daemon
+ 
+ This will fail if the parent host has avahi-daemon installed; however,
+ if it does not, you can set up a second container (avahi-test2) and
+ install avahi there.  That should then fail (as the issue requires two
+ copies of avahi-daemon in the same uid namespace)
+ 
+ [Regression Potential]
+ 
+  * The fix removes all rlimits configured by avahi on startup; this is
+ an extra step avahi takes that most programs do not (limiting
+ memory usage, running process count, etc.).  It's possible an unknown bug
+ then consumes significant system resources as a result of that limit no
+ longer being in place, which was previously hidden by Avahi crashing
+ instead.  However, I believe this risk is significantly reduced, as this
+ change has been shipping upstream for many months and I have not seen any
+ reports of new problems - it has, however, fixed a number of existing
+ crashes/problems.
+ 
+ [Other Info]
+  
+  * This change already exists upstream in 0.7, which is in bionic.  An SRU is 
required to artful, xenial, and trusty.
+  * The main case this may not fix the issue is if users have modified their 
avahi-daemon.conf file - but it will fix new installs and most existing 
installs, as most users don't modify the file.  Users may also be prompted on 
upgrade to replace the file.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1661869

Title:
  maas install fails inside of a 16.04 lxd container due to avahi
  problems

Status in MAAS:
  Invalid
Status in avahi package in Ubuntu:
  In Progress
Status in lxd package in Ubuntu:
  Invalid

Bug description:
  [Original Description]
  The bug, and workaround, are clearly described in this mailing list thread:

  https://lists.linuxcontainers.org/pipermail/lxc-
  users/2016-January/010791.html

  I'm trying to install MAAS in a LXD container, but that's failing due
  to avahi package install problems.  I'm tagging all packages here.

  [Issue]
  Avahi sets a number of rlimits on startup, including the maximum number of 
processes (nproc=2) and limits on memory usage.  These limits are hit in a 
number of cases - specifically, the maximum process limit is hit if you run lxd 
containers in 'privileged' mode (so that avahi has the same uid in multiple 
containers), and large networks can trigger the memory limit.

  The fix is to remove these default rlimits completely from the
  configuration file.
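
  For reference, these limits come from the [rlimits] section of
  /etc/avahi/avahi-daemon.conf.  A sketch of the kind of stock section the
  fix deletes (the key names are real, but the exact values shipped vary by
  release):

  # Sketch of the [rlimits] section removed by the fix; values illustrative.
  [rlimits]
  rlimit-core=0
  rlimit-data=4194304
  rlimit-fsize=0
  rlimit-nofile=768
  rlimit-stack=4194304
  rlimit-nproc=2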

  [Impact]

   * Avahi is unable to start inside of containers without UID namespace 
isolation because an rlimit on the maximum number of processes is set by 
default to 2.  When a container launches Avahi, the total number of processes 
on the system in all containers exceeds this limit and Avahi is killed.  It 
also fails at install time, rather than runtime, due to a failure to start the 
service.
   * Some users also have issues with the maximum memory allocation causing 
Avahi to exit on networks with a large number of services, as the memory limit 
was quite small (4MB).  Refer to LP #1638345.

  [Test Case]

   * setup lxd (apt install lxd, lxd init, get working networking)
   * lxc launch ubuntu:16.04 avahi-test --config security.privileged=true 
   * lxc exec 

[Touch-packages] [Bug 1661869] Re: maas install fails inside of a 16.04 lxd container due to avahi problems

2018-03-14 Thread Trent Lloyd
** Patch added: "lp1661869-artful.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1661869/+attachment/5079949/+files/lp1661869-artful.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1661869

Title:
  maas install fails inside of a 16.04 lxd container due to avahi
  problems

Status in MAAS:
  Invalid
Status in avahi package in Ubuntu:
  In Progress
Status in lxd package in Ubuntu:
  Invalid


To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1661869/+subscriptions



[Touch-packages] [Bug 1661869] Re: maas install fails inside of a 16.04 lxd container due to avahi problems

2018-03-14 Thread Trent Lloyd
** Patch added: "lp1661869-xenial.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1661869/+attachment/5079950/+files/lp1661869-xenial.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1661869

Title:
  maas install fails inside of a 16.04 lxd container due to avahi
  problems

Status in MAAS:
  Invalid
Status in avahi package in Ubuntu:
  In Progress
Status in lxd package in Ubuntu:
  Invalid


To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1661869/+subscriptions



[Touch-packages] [Bug 1661869] Re: maas install fails inside of a 16.04 lxd container due to avahi problems

2018-03-14 Thread Trent Lloyd
Trusty is technically not directly affected by the container nproc issue,
as there was an Ubuntu patch (dropped in xenial) to skip setting
rlimit-nproc when /run/container_type=lxc.

It could still happen if that file doesn't exist, though, and the memory
issue can still occur, so I still recommend the upload.

** Description changed:

  [Original Description]
  The bug, and workaround, are clearly described in this mailing list thread:
  
  https://lists.linuxcontainers.org/pipermail/lxc-
  users/2016-January/010791.html
  
  I'm trying to install MAAS in a LXD container, but that's failing due to
  avahi package install problems.  I'm tagging all packages here.
  
  [Issue]
  Avahi sets a number of rlimits on startup including the maximum number of 
processes (nproc=2) and limits on memory usage.  These limits are hit in a 
number of cases  - specifically the maximum process limit is hit if you run lxd 
containers in 'privileged' mode such that avahi has the same uid in multiple 
containers and large networks can trigger the memory limit.
  
  The fix is to remove these default rlimits completely from the
  configuration file.
  
  [Impact]
  
-  * Avahi is unable to start inside of containers without UID namespace 
isolation because an rlimit on the maximum number of processes is set by 
default to 2.  When a container launches Avahi, the total number of processes 
on the system in all containers exceeds this limit and Avahi is killed.  It 
also fails at install time, rather than runtime due to a failure to start the 
service.
-  * Some users also have issues with the maximum memory allocation causing 
Avahi to exit on networks with a large number of services as the memory limit 
was quite small (4MB).  Refer LP #1638345
+  * Avahi is unable to start inside of containers without UID namespace 
isolation because an rlimit on the maximum number of processes is set by 
default to 2.  When a container launches Avahi, the total number of processes 
on the system in all containers exceeds this limit and Avahi is killed.  It 
also fails at install time, rather than runtime due to a failure to start the 
service.
+  * Some users also have issues with the maximum memory allocation causing 
Avahi to exit on networks with a large number of services as the memory limit 
was quite small (4MB).  Refer LP #1638345
  
  [Test Case]
  
-  * setup lxd (apt install lxd, lxd init, get working networking)
-  * lxc launch ubuntu:16.04 avahi-test --config security.privileged=true 
-  * lxc exec avahi-test sudo apt install avahi-daemon
+  * setup lxd (apt install lxd, lxd init, get working networking)
+  * lxc launch ubuntu:16.04 avahi-test --config security.privileged=true
+  * lxc exec avahi-test sudo apt install avahi-daemon
  
  This will fail if the parent host has avahi-daemon installed, however,
  if it does not you can setup a second container (avahi-test2) and
  install avahi there.  That should then fail (as the issue requires 2
  copies of avahi-daemon in the same uid namespace to fail)
  
  [Regression Potential]
  
-  * The fix removes all rlimits configured by avahi on startup, this is
+  * The fix removes all rlimits configured by avahi on startup, this is
  an extra step avahi takes that most programs did not take (limiting
  memory usage, running process count, etc).  It's possible an unknown bug
  then consumes significant system resources as a result of that limit no
  longer being in place, that was previously hidden by Avahi crashing
instead.  However, I believe this risk is significantly reduced, as this
change has been shipping upstream for many months and we have not seen
any reports of new problems - rather, it has fixed a number of existing
crashes/problems.
  
  [Other Info]
-  
-  * This change already exists upstream in 0.7 which is in bionic.  SRU 
required to artful, xenial, trusty.
-  * The main case this may not fix the issue is if they have modified their 
avahi-daemon.conf file - but it will fix new installs and most installs as most 
users don't modify the file.  And users may be prompted on upgrade to replace 
the file.
+ 
+  * This change already exists upstream in 0.7 which is in bionic.  SRU 
required to artful, xenial, trusty.
+  * The main case this may not fix the issue is if they have modified their 
avahi-daemon.conf file - but it will fix new installs and most installs as most 
users don't modify the file.  And users may be prompted on upgrade to replace 
the file.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1661869

Title:
  maas install fails inside of a 16.04 lxd container due to avahi
  problems

Status in MAAS:
  Invalid
Status in avahi package in Ubuntu:
  In Progress
Status in lxd package in Ubuntu:
  Invalid

Bug description:
  [Original Description]
  The bug, and workaround, are clearly described in this mailing list thread:

  https://lists.linuxcontainers.org/pipermai

[Touch-packages] [Bug 1661869] Re: maas install fails inside of a 16.04 lxd container due to avahi problems

2018-03-14 Thread Trent Lloyd
** Patch added: "lp1661869-trusty.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1661869/+attachment/5079951/+files/lp1661869-trusty.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1661869

Title:
  maas install fails inside of a 16.04 lxd container due to avahi
  problems

Status in MAAS:
  Invalid
Status in avahi package in Ubuntu:
  In Progress
Status in lxd package in Ubuntu:
  Invalid

Bug description:
  [Original Description]
  The bug, and workaround, are clearly described in this mailing list thread:

  https://lists.linuxcontainers.org/pipermail/lxc-
  users/2016-January/010791.html

  I'm trying to install MAAS in a LXD container, but that's failing due
  to avahi package install problems.  I'm tagging all packages here.

  [Issue]
  Avahi sets a number of rlimits on startup including the maximum number of 
processes (nproc=2) and limits on memory usage.  These limits are hit in a 
number of cases  - specifically the maximum process limit is hit if you run lxd 
containers in 'privileged' mode such that avahi has the same uid in multiple 
containers and large networks can trigger the memory limit.

  The fix is to remove these default rlimits completely from the
  configuration file.

  [Impact]

   * Avahi is unable to start inside of containers without UID namespace 
isolation because an rlimit on the maximum number of processes is set by 
default to 2.  When a container launches Avahi, the total number of processes 
on the system in all containers exceeds this limit and Avahi is killed.  It 
also fails at install time, rather than runtime due to a failure to start the 
service.
   * Some users also have issues with the maximum memory allocation causing 
Avahi to exit on networks with a large number of services as the memory limit 
was quite small (4MB).  Refer to LP #1638345
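The memory-limit half of the impact can be illustrated with a short, hypothetical sketch (not Avahi's code; it assumes a Linux shell with `ulimit -v` support and `python3` available as a convenient allocator):

```shell
# Hypothetical sketch: emulate the effect of a small memory rlimit, like
# Avahi's old 4 MB rlimit-data cap, by lowering the address-space limit
# in a subshell and then attempting a much larger allocation.  The kernel
# refuses it, just as it refused Avahi's allocations on service-rich
# networks.
result=$(
    ulimit -v 131072 2>/dev/null   # cap address space at 128 MB (KB units)
    if python3 -c 'b = bytearray(512 * 1024 * 1024)' 2>/dev/null; then
        echo "allocation succeeded"
    else
        echo "allocation failed under the rlimit"
    fi
)
echo "$result"
```

Removing the [rlimits] section, as the fix does, is equivalent to never lowering these limits in the first place.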

  [Test Case]

   * set up lxd (apt install lxd, lxd init, get working networking)
   * lxc launch ubuntu:16.04 avahi-test --config security.privileged=true
   * lxc exec avahi-test sudo apt install avahi-daemon

  This will fail if the parent host has avahi-daemon installed; if it
  does not, you can set up a second container (avahi-test2) and install
  avahi there.  That second install should then fail (the issue requires
  two copies of avahi-daemon running in the same uid namespace).

  [Regression Potential]

   * The fix removes all rlimits configured by avahi on startup; this is
  an extra step avahi takes that most programs did not take (limiting
  memory usage, running process count, etc).  It's possible an unknown
  bug then consumes significant system resources as a result of that
  limit no longer being in place, having previously been hidden by Avahi
  crashing instead.  However, I believe this risk is significantly
  reduced, as this change has been shipping upstream for many months and
  we have not seen any reports of new problems - rather, it has fixed a
  number of existing crashes/problems.

  [Other Info]

   * This change already exists upstream in 0.7 which is in bionic.  SRU 
required to artful, xenial, trusty.
   * The main case this may not fix the issue is if they have modified their 
avahi-daemon.conf file - but it will fix new installs and most installs as most 
users don't modify the file.  And users may be prompted on upgrade to replace 
the file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1661869/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1638345] Re: avahi-daemon crashes multiple times an hour

2018-03-14 Thread Trent Lloyd
I have updated Bug #1661869 with an SRU template and new updates to fix
this issue, plus the issue in that bug.

The fix is simply to update /etc/avahi/avahi-daemon.conf and comment out
the entire [rlimits] section.  You can do this yourself (but the package
update will do it for you).
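For reference, the [rlimits] section in /etc/avahi/avahi-daemon.conf looks roughly like this once commented out (the exact keys and values vary by release; rlimit-nproc and the small rlimit-data are the ones discussed in these bugs):

```ini
# /etc/avahi/avahi-daemon.conf (excerpt; values approximate)
#[rlimits]
#rlimit-core=0
#rlimit-data=4194304
#rlimit-fsize=0
#rlimit-nofile=768
#rlimit-stack=4194304
#rlimit-nproc=2
```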

It'd be great if you could try this fix yourself and report back on
whether it fixes the issue for you; if not, please consider opening a
new bug using apport.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1638345

Title:
  avahi-daemon crashes multiple times an hour

Status in avahi package in Ubuntu:
  Confirmed

Bug description:
  Ever since upgrading to 16.10, avahi-daemon crashes multiple times an
  hour, leading to the prompt to report the issue to Ubuntu.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.10
  Package: avahi-daemon 0.6.32-1ubuntu1
  ProcVersionSignature: Ubuntu 4.8.0-26.28-generic 4.8.0
  Uname: Linux 4.8.0-26-generic x86_64
  NonfreeKernelModules: zfs zunicode zcommon znvpair zavl
  ApportVersion: 2.20.3-0ubuntu8
  Architecture: amd64
  CurrentDesktop: Unity
  Date: Tue Nov  1 10:48:35 2016
  InstallationDate: Installed on 2016-07-08 (115 days ago)
  InstallationMedia: Ubuntu 16.04 LTS "Xenial Xerus" - Release amd64 
(20160420.1)
  SourcePackage: avahi
  UpgradeStatus: Upgraded to yakkety on 2016-10-31 (0 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1638345/+subscriptions



[Touch-packages] [Bug 1661869] Re: maas install fails inside of a 16.04 lxd container due to avahi problems

2018-03-15 Thread Trent Lloyd
Yeah, the exact same change in the 0.7 release (rlimit section removal)
that is shipping in Bionic fixes the issue there, and the issue isn't
present in Bionic.

I also individually tested each of the trusty/xenial/artful packages
built from the supplied debdiffs to ensure the issue goes away after
upgrading to the rebuilt package - and it did, with the exception of
trusty, where the number-of-open-files issue doesn't occur because of an
Ubuntu patch that skips that rlimit on LXC containers (where
/run/container_type=lxc).  However, trusty still needs the rlimit-data
removal, so I applied the exact same changes there.
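The container detection mentioned above can be sketched as follows (illustrative only; the real check lives in Avahi's patched C code, but the /run/container_type=lxc marker is the one the Ubuntu patch consults):

```shell
# Sketch: Ubuntu's trusty patch skips setting the nofile rlimit when
# running inside an LXC container, detected via /run/container_type.
if [ "$(cat /run/container_type 2>/dev/null)" = "lxc" ]; then
    result="inside lxc: skip rlimit-nofile"
else
    result="not lxc: apply rlimit-nofile"
fi
echo "$result"
```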

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1661869

Title:
  maas install fails inside of a 16.04 lxd container due to avahi
  problems

Status in MAAS:
  Invalid
Status in avahi package in Ubuntu:
  In Progress
Status in lxd package in Ubuntu:
  Invalid
Status in avahi source package in Trusty:
  New
Status in lxd source package in Trusty:
  New
Status in avahi source package in Xenial:
  New
Status in lxd source package in Xenial:
  New
Status in avahi source package in Artful:
  New
Status in lxd source package in Artful:
  New

Bug description:
  [Original Description]
  The bug, and workaround, are clearly described in this mailing list thread:

  https://lists.linuxcontainers.org/pipermail/lxc-
  users/2016-January/010791.html

  I'm trying to install MAAS in a LXD container, but that's failing due
  to avahi package install problems.  I'm tagging all packages here.

  [Issue]
  Avahi sets a number of rlimits on startup including the maximum number of 
processes (nproc=2) and limits on memory usage.  These limits are hit in a 
number of cases  - specifically the maximum process limit is hit if you run lxd 
containers in 'privileged' mode such that avahi has the same uid in multiple 
containers and large networks can trigger the memory limit.

  The fix is to remove these default rlimits completely from the
  configuration file.

  [Impact]

   * Avahi is unable to start inside of containers without UID namespace 
isolation because an rlimit on the maximum number of processes is set by 
default to 2.  When a container launches Avahi, the total number of processes 
on the system in all containers exceeds this limit and Avahi is killed.  It 
also fails at install time, rather than runtime due to a failure to start the 
service.
   * Some users also have issues with the maximum memory allocation causing 
Avahi to exit on networks with a large number of services as the memory limit 
was quite small (4MB).  Refer to LP #1638345

  [Test Case]

   * set up lxd (apt install lxd, lxd init, get working networking)
   * lxc launch ubuntu:16.04 avahi-test --config security.privileged=true
   * lxc exec avahi-test sudo apt install avahi-daemon

  This will fail if the parent host has avahi-daemon installed; if it
  does not, you can set up a second container (avahi-test2) and install
  avahi there.  That second install should then fail (the issue requires
  two copies of avahi-daemon running in the same uid namespace).

  [Regression Potential]

   * The fix removes all rlimits configured by avahi on startup; this is
  an extra step avahi takes that most programs did not take (limiting
  memory usage, running process count, etc).  It's possible an unknown
  bug then consumes significant system resources as a result of that
  limit no longer being in place, having previously been hidden by Avahi
  crashing instead.  However, I believe this risk is significantly
  reduced, as this change has been shipping upstream for many months and
  we have not seen any reports of new problems - rather, it has fixed a
  number of existing crashes/problems.

   * The main case this may not fix the issue is if they have modified
  their avahi-daemon.conf file - but it will fix new installs and most
  installs as most users don't modify the file. And users may be
  prompted on upgrade to replace the file.

  [Other Info]

   * This change already exists upstream in 0.7 which is in bionic.  SRU
  required to artful, xenial, trusty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1661869/+subscriptions



[Touch-packages] [Bug 1752411] Re: bind9-host, avahi-daemon-check-dns.sh hang forever causes network connections to get stuck

2018-08-21 Thread Trent Lloyd
** Patch added: "lp1752411-avahi-host-timeout-bionic.patch"
   
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1752411/+attachment/5178693/+files/lp1752411-avahi-host-timeout-bionic.patch

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1752411

Title:
  bind9-host, avahi-daemon-check-dns.sh hang forever causes network
  connections to get stuck

Status in avahi package in Ubuntu:
  Confirmed
Status in bind9 package in Ubuntu:
  Confirmed
Status in openconnect package in Ubuntu:
  Invalid
Status in strongswan package in Ubuntu:
  Invalid
Status in avahi package in Debian:
  New

Bug description:
  On 18.04 Openconnect connects successfully to any of multiple VPN
  concentrators but network traffic does not flow across the VPN tunnel
  connection. When testing on 16.04 this works flawlessly. This also
  worked on this system when it was on 17.10.

  I have tried reducing the mtu of the tun0 network device but this has
  not resulted in me being able to successfully ping the IP address.

  Example showing ping attempt to the IP of DNS server:

  ~$ cat /etc/resolv.conf 
  # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
  # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
  # 127.0.0.53 is the systemd-resolved stub resolver.
  # run "systemd-resolve --status" to see details about the actual nameservers.

  nameserver 172.29.88.11
  nameserver 127.0.0.53

  liam@liam-lat:~$ netstat -nr
  Kernel IP routing table
  Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
  0.0.0.0         192.168.1.1     0.0.0.0         UG        0 0          0 wlp2s0
  105.27.198.106  192.168.1.1     255.255.255.255 UGH       0 0          0 wlp2s0
  169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 docker0
  172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
  172.29.0.0      0.0.0.0         255.255.0.0     U         0 0          0 tun0
  172.29.88.11    0.0.0.0         255.255.255.255 UH        0 0          0 tun0
  192.168.1.0     0.0.0.0         255.255.255.0   U         0 0          0 wlp2s0
  liam@liam-lat:~$ ping 172.29.88.11
  PING 172.29.88.11 (172.29.88.11) 56(84) bytes of data.
  ^C
  --- 172.29.88.11 ping statistics ---
  4 packets transmitted, 0 received, 100% packet loss, time 3054ms

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: openconnect 7.08-3
  ProcVersionSignature: Ubuntu 4.15.0-10.11-generic 4.15.3
  Uname: Linux 4.15.0-10-generic x86_64
  ApportVersion: 2.20.8-0ubuntu10
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Wed Feb 28 22:11:33 2018
  InstallationDate: Installed on 2017-06-15 (258 days ago)
  InstallationMedia: Ubuntu 16.04.1 LTS "Xenial Xerus" - Release amd64 
(20160719)
  SourcePackage: openconnect
  UpgradeStatus: Upgraded to bionic on 2018-02-22 (6 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1752411/+subscriptions



[Touch-packages] [Bug 1752411] Re: bind9-host, avahi-daemon-check-dns.sh hang forever causes network connections to get stuck

2018-08-21 Thread Trent Lloyd
** Patch added: "lp1752411-avahi-host-timeout-cosmic.patch"
   
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1752411/+attachment/5178692/+files/lp1752411-avahi-host-timeout-cosmic.patch

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1752411

Title:
  bind9-host, avahi-daemon-check-dns.sh hang forever causes network
  connections to get stuck

Status in avahi package in Ubuntu:
  Confirmed
Status in bind9 package in Ubuntu:
  Confirmed
Status in openconnect package in Ubuntu:
  Invalid
Status in strongswan package in Ubuntu:
  Invalid
Status in avahi package in Debian:
  New

Bug description:
  On 18.04 Openconnect connects successfully to any of multiple VPN
  concentrators but network traffic does not flow across the VPN tunnel
  connection. When testing on 16.04 this works flawlessly. This also
  worked on this system when it was on 17.10.

  I have tried reducing the mtu of the tun0 network device but this has
  not resulted in me being able to successfully ping the IP address.

  Example showing ping attempt to the IP of DNS server:

  ~$ cat /etc/resolv.conf 
  # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
  # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
  # 127.0.0.53 is the systemd-resolved stub resolver.
  # run "systemd-resolve --status" to see details about the actual nameservers.

  nameserver 172.29.88.11
  nameserver 127.0.0.53

  liam@liam-lat:~$ netstat -nr
  Kernel IP routing table
  Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
  0.0.0.0         192.168.1.1     0.0.0.0         UG        0 0          0 wlp2s0
  105.27.198.106  192.168.1.1     255.255.255.255 UGH       0 0          0 wlp2s0
  169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 docker0
  172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
  172.29.0.0      0.0.0.0         255.255.0.0     U         0 0          0 tun0
  172.29.88.11    0.0.0.0         255.255.255.255 UH        0 0          0 tun0
  192.168.1.0     0.0.0.0         255.255.255.0   U         0 0          0 wlp2s0
  liam@liam-lat:~$ ping 172.29.88.11
  PING 172.29.88.11 (172.29.88.11) 56(84) bytes of data.
  ^C
  --- 172.29.88.11 ping statistics ---
  4 packets transmitted, 0 received, 100% packet loss, time 3054ms

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: openconnect 7.08-3
  ProcVersionSignature: Ubuntu 4.15.0-10.11-generic 4.15.3
  Uname: Linux 4.15.0-10-generic x86_64
  ApportVersion: 2.20.8-0ubuntu10
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Wed Feb 28 22:11:33 2018
  InstallationDate: Installed on 2017-06-15 (258 days ago)
  InstallationMedia: Ubuntu 16.04.1 LTS "Xenial Xerus" - Release amd64 
(20160719)
  SourcePackage: openconnect
  UpgradeStatus: Upgraded to bionic on 2018-02-22 (6 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1752411/+subscriptions



[Touch-packages] [Bug 1752411] Re: bind9-host, avahi-daemon-check-dns.sh hang forever causes network connections to get stuck

2018-08-21 Thread Trent Lloyd
** Description changed:

+ [Impact]
+ 
+  * Network connections for some users fail (in some cases on a direct
+ interface, in others when connecting a VPN) because the 'host' command
+ called by /usr/lib/avahi/avahi-daemon-check-dns.sh to check for .local
+ in DNS never times out as it should, leaving the script hanging
+ indefinitely and blocking interface up and start-up. This appears to be
+ a bug in 'host' triggered only in some circumstances; since the root
+ cause has not been easily identified, we implement a workaround that
+ runs it under 'timeout', which in any case acts as a fall-back.
+ 
+ [Test Case]
+ 
+  * Multiple people have been unable to create a reproducer on a generic
+ machine (e.g. it does not occur in a VM). I have a specific machine I
+ can reproduce it on (a Skull Canyon NUC with an Intel I219-LM) by simply
+ running "ifdown br0; ifup br0", and there are clearly tens of other
+ users affected in varying circumstances that all involve the same
+ symptoms, but no clear test case exists. The best I can suggest is that
+ I test the patch on my system to ensure it works as expected; the change
+ is only 1 line, which is fairly easily auditable and understandable.
+ 
+ [Regression Potential]
+ 
+  * The change is a single line change to the shell script to call host with 
+ "timeout". When tested on working and non-working systems this appears to
+ function as expected. I believe the regression potential for this is
+ consequently low.
+  * In an attempt to anticipate possible issues, I checked that the timeout 
command is in the same path (/usr/bin) as the host command that is already 
called without a path, and the coreutils package (which contains timeout) is an 
Essential package. I also checked that timeout is not a built-in in bash, for 
those that have changed /bin/sh to bash (just in case).
+ 
+ [Other Info]
+  
+  * N/A
+ 
+ [Original Bug Description]
+ 
  On 18.04 Openconnect connects successfully to any of multiple VPN
  concentrators but network traffic does not flow across the VPN tunnel
  connection. When testing on 16.04 this works flawlessly. This also
  worked on this system when it was on 17.10.
  
  I have tried reducing the mtu of the tun0 network device but this has
  not resulted in me being able to successfully ping the IP address.
  
  Example showing ping attempt to the IP of DNS server:
  
- ~$ cat /etc/resolv.conf 
+ ~$ cat /etc/resolv.conf
  # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
  # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
  # 127.0.0.53 is the systemd-resolved stub resolver.
  # run "systemd-resolve --status" to see details about the actual nameservers.
  
  nameserver 172.29.88.11
  nameserver 127.0.0.53
  
  liam@liam-lat:~$ netstat -nr
  Kernel IP routing table
  Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
  0.0.0.0         192.168.1.1     0.0.0.0         UG        0 0          0 wlp2s0
  105.27.198.106  192.168.1.1     255.255.255.255 UGH       0 0          0 wlp2s0
  169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 docker0
  172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
  172.29.0.0      0.0.0.0         255.255.0.0     U         0 0          0 tun0
  172.29.88.11    0.0.0.0         255.255.255.255 UH        0 0          0 tun0
  192.168.1.0     0.0.0.0         255.255.255.0   U         0 0          0 wlp2s0
  liam@liam-lat:~$ ping 172.29.88.11
  PING 172.29.88.11 (172.29.88.11) 56(84) bytes of data.
  ^C
  --- 172.29.88.11 ping statistics ---
  4 packets transmitted, 0 received, 100% packet loss, time 3054ms
  
  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: openconnect 7.08-3
  ProcVersionSignature: Ubuntu 4.15.0-10.11-generic 4.15.3
  Uname: Linux 4.15.0-10-generic x86_64
  ApportVersion: 2.20.8-0ubuntu10
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Wed Feb 28 22:11:33 2018
  InstallationDate: Installed on 2017-06-15 (258 days ago)
  InstallationMedia: Ubuntu 16.04.1 LTS "Xenial Xerus" - Release amd64 
(20160719)
  SourcePackage: openconnect
  UpgradeStatus: Upgraded to bionic on 2018-02-22 (6 days ago)

** Patch removed: "lp1752411-avahi-host-timeout.diff"
   
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1752411/+attachment/5117372/+files/lp1752411-avahi-host-timeout.diff

** Patch removed: "lp1752411-avahi-host-timeout-cosmic.patch"
   
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1752411/+attachment/5178692/+files/lp1752411-avahi-host-timeout-cosmic.patch

** Patch removed: "lp1752411-avahi-host-timeout-bionic.patch"
   
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1752411/+attachment/5178693/+files/lp1752411-avahi-host-timeout-bionic.patch

** Patch added: "lp1752411-avahi-host-timeout-bionic.patch"
   
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1752411/+attachment/5178709/+files/lp1752411-avahi-host-timeout-bionic.patch


[Touch-packages] [Bug 1752411] Re: bind9-host, avahi-daemon-check-dns.sh hang forever causes network connections to get stuck

2018-08-21 Thread Trent Lloyd
Request sponsorship of this upload for cosmic and then SRU to bionic
 - New debdiff uploaded for both bionic and cosmic
 - Fixed the SRU version for bionic
 - Added a comment about the workaround to the script
 - Updated bug description with SRU template

Tested the patch on bionic using a package built from this diff, on my
machine which consistently exhibits the issue; it works (albeit with a
5 second delay on network interface up - hopefully after this we can
move on to fixing the actual issue in host).

The key note I see on the machine I can reproduce this on (a Linux
bridge over an Intel I219-LM) is that both the interface route and the
default route are in the 'linkdown' state for about 0.7 seconds total
when the host command fires. When I looked at a different machine, that
stage never happened, or at least lasted for a much shorter time (I'd
have to check ip monitor again).

I don't expect anyone else to reproduce this for testing; I'm happy to
test the -proposed packages on an affected machine.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1752411

Title:
  bind9-host, avahi-daemon-check-dns.sh hang forever causes network
  connections to get stuck

Status in avahi package in Ubuntu:
  Confirmed
Status in bind9 package in Ubuntu:
  Confirmed
Status in openconnect package in Ubuntu:
  Invalid
Status in strongswan package in Ubuntu:
  Invalid
Status in avahi package in Debian:
  New

Bug description:
  [Impact]

   * Network connections for some users fail (in some cases on a direct
  interface, in others when connecting a VPN) because the 'host' command
  called by /usr/lib/avahi/avahi-daemon-check-dns.sh to check for .local
  in DNS never times out as it should, leaving the script hanging
  indefinitely and blocking interface up and start-up. This appears to
  be a bug in 'host' triggered only in some circumstances; since the
  root cause has not been easily identified, we implement a workaround
  that runs it under 'timeout', which in any case acts as a fall-back.

  [Test Case]

   * Multiple people have been unable to create a reproducer on a
  generic machine (e.g. it does not occur in a VM). I have a specific
  machine I can reproduce it on (a Skull Canyon NUC with an Intel
  I219-LM) by simply running "ifdown br0; ifup br0", and there are
  clearly tens of other users affected in varying circumstances that all
  involve the same symptoms, but no clear test case exists. The best I
  can suggest is that I test the patch on my system to ensure it works
  as expected; the change is only 1 line, which is fairly easily
  auditable and understandable.

  [Regression Potential]

   * The change is a single line change to the shell script to call host with 
"timeout". When tested on working and non-working systems this appears to
function as expected. I believe the regression potential for this is
consequently low.
   * In an attempt to anticipate possible issues, I checked that the timeout 
command is in the same path (/usr/bin) as the host command that is already 
called without a path, and the coreutils package (which contains timeout) is an 
Essential package. I also checked that timeout is not a built-in in bash, for 
those that have changed /bin/sh to bash (just in case).

  [Other Info]
   
   * N/A

  [Original Bug Description]

  On 18.04 Openconnect connects successfully to any of multiple VPN
  concentrators but network traffic does not flow across the VPN tunnel
  connection. When testing on 16.04 this works flawlessly. This also
  worked on this system when it was on 17.10.

  I have tried reducing the mtu of the tun0 network device but this has
  not resulted in me being able to successfully ping the IP address.

  Example showing ping attempt to the IP of DNS server:

  ~$ cat /etc/resolv.conf
  # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
  # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
  # 127.0.0.53 is the systemd-resolved stub resolver.
  # run "systemd-resolve --status" to see details about the actual nameservers.

  nameserver 172.29.88.11
  nameserver 127.0.0.53

  liam@liam-lat:~$ netstat -nr
  Kernel IP routing table
  Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
  0.0.0.0         192.168.1.1     0.0.0.0         UG        0 0          0 wlp2s0
  105.27.198.106  192.168.1.1     255.255.255.255 UGH       0 0          0 wlp2s0
  169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 docker0
  172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
  172.29.0.0      0.0.0.0         255.255.0.0     U         0 0          0 tun0
  172.29.88.11    0.0.0.0         255.255.255.255 UH        0 0          0 tun0
  192.168.1.0     0.0.0.0         255.255.255.0   U         0 0          0 wlp2s0
  liam@liam-lat:~$ ping 172.29.88.11
  PING 172.29.88.11 (172.29.88.11) 56(84)

[Touch-packages] [Bug 1752411] Re: bind9-host, avahi-daemon-check-dns.sh hang forever causes network connections to get stuck

2018-08-21 Thread Trent Lloyd
** Patch added: "lp1752411-avahi-host-timeout-cosmic.patch"
   
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1752411/+attachment/5178710/+files/lp1752411-avahi-host-timeout-cosmic.patch

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1752411

Title:
  bind9-host, avahi-daemon-check-dns.sh hang forever causes network
  connections to get stuck

Status in avahi package in Ubuntu:
  Confirmed
Status in bind9 package in Ubuntu:
  Confirmed
Status in openconnect package in Ubuntu:
  Invalid
Status in strongswan package in Ubuntu:
  Invalid
Status in avahi package in Debian:
  New

Bug description:
  [Impact]

   * Network connections for some users fail (in some cases on a direct
  interface, in others when connecting a VPN) because the 'host' command
  called by /usr/lib/avahi/avahi-daemon-check-dns.sh to check for .local
  in DNS never times out as it should, leaving the script hanging
  indefinitely and blocking interface up and start-up. This appears to
  be a bug in 'host' triggered only in some circumstances; since the
  root cause has not been easily identified, we implement a workaround
  that runs it under 'timeout', which in any case acts as a fall-back.

  [Test Case]

   * Multiple people have been unable to create a reproducer on a
  generic machine (e.g. it does not occur in a VM). I have a specific
  machine I can reproduce it on (a Skull Canyon NUC with an Intel
  I219-LM) by simply running "ifdown br0; ifup br0", and there are
  clearly tens of other users affected in varying circumstances that all
  involve the same symptoms, but no clear test case exists. The best I
  can suggest is that I test the patch on my system to ensure it works
  as expected; the change is only 1 line, which is fairly easily
  auditable and understandable.

  [Regression Potential]

   * The change is a single line change to the shell script to call host with 
"timeout". When tested on working and non-working systems this appears to
function as expected. I believe the regression potential for this is
consequently low.
   * In an attempt to anticipate possible issues, I checked that the timeout 
command is in the same path (/usr/bin) as the host command that is already 
called without a path, and the coreutils package (which contains timeout) is an 
Essential package. I also checked that timeout is not a built-in in bash, for 
those that have changed /bin/sh to bash (just in case).

  [Other Info]
   
   * N/A

  [Original Bug Description]

  On 18.04 Openconnect connects successfully to any of multiple VPN
  concentrators but network traffic does not flow across the VPN tunnel
  connection. When testing on 16.04 this works flawlessly. This also
  worked on this system when it was on 17.10.

  I have tried reducing the mtu of the tun0 network device but this has
  not resulted in me being able to successfully ping the IP address.

  Example showing ping attempt to the IP of DNS server:

  ~$ cat /etc/resolv.conf
  # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
  # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
  # 127.0.0.53 is the systemd-resolved stub resolver.
  # run "systemd-resolve --status" to see details about the actual nameservers.

  nameserver 172.29.88.11
  nameserver 127.0.0.53

  liam@liam-lat:~$ netstat -nr
  Kernel IP routing table
  Destination Gateway Genmask Flags   MSS Window  irtt Iface
  0.0.0.0 192.168.1.1 0.0.0.0 UG0 0  0 
wlp2s0
  105.27.198.106  192.168.1.1 255.255.255.255 UGH   0 0  0 
wlp2s0
  169.254.0.0 0.0.0.0 255.255.0.0 U 0 0  0 
docker0
  172.17.0.0  0.0.0.0 255.255.0.0 U 0 0  0 
docker0
  172.29.0.0  0.0.0.0 255.255.0.0 U 0 0  0 tun0
  172.29.88.110.0.0.0 255.255.255.255 UH0 0  0 tun0
  192.168.1.0 0.0.0.0 255.255.255.0   U 0 0  0 
wlp2s0
  liam@liam-lat:~$ ping 172.29.88.11
  PING 172.29.88.11 (172.29.88.11) 56(84) bytes of data.
  ^C
  --- 172.29.88.11 ping statistics ---
  4 packets transmitted, 0 received, 100% packet loss, time 3054ms

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: openconnect 7.08-3
  ProcVersionSignature: Ubuntu 4.15.0-10.11-generic 4.15.3
  Uname: Linux 4.15.0-10-generic x86_64
  ApportVersion: 2.20.8-0ubuntu10
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Wed Feb 28 22:11:33 2018
  InstallationDate: Installed on 2017-06-15 (258 days ago)
  InstallationMedia: Ubuntu 16.04.1 LTS "Xenial Xerus" - Release amd64 
(20160719)
  SourcePackage: openconnect
  UpgradeStatus: Upgraded to bionic on 2018-02-22 (6 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1752411/+subscriptions


[Touch-packages] [Bug 1752411] Re: bind9-host, avahi-daemon-check-dns.sh hang forever causes network connections to get stuck

2018-08-23 Thread Trent Lloyd
I agree with the sentiment that 5 seconds feels too long; however, as a
workaround I decided to just copy the existing timeout. I certainly
would not want to make it longer, since this is in the critical boot
path.

I would generally agree that a DNS request should fail faster; however,
there are some cases where it won't, e.g. spanning tree bring-up on
ports can take 2 seconds.

My hope is to correctly fix host after getting this in, since the impact
is very high for affected users.

This check may actually be able to go away: I believe both systemd-
resolved and libnss-mdns (the latest version, which I think is not in
bionic) implement the .local label checking at runtime instead of this
old hack. So for cosmic+ we can probably get rid of this logic, which
always sucked anyway; we only ever needed to disable nss-mdns, not
avahi entirely (since apps should normally resolve IPs using avahi's
API anyway, the impact to actual avahi usage is low).

Since the impact is high but only on a smaller subset of users, I think
we should go with matching the current timeout for now and worry about
further improvements later.

I've verified the cosmic upload is working as expected on a non-affected
system.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1752411

Title:
  bind9-host, avahi-daemon-check-dns.sh hang forever causes network
  connections to get stuck

Status in avahi package in Ubuntu:
  In Progress
Status in bind9 package in Ubuntu:
  Confirmed
Status in openconnect package in Ubuntu:
  Invalid
Status in strongswan package in Ubuntu:
  Invalid
Status in avahi source package in Bionic:
  Triaged
Status in bind9 source package in Bionic:
  Confirmed
Status in avahi source package in Cosmic:
  In Progress
Status in bind9 source package in Cosmic:
  Confirmed
Status in avahi package in Debian:
  New

Bug description:
  [Impact]

   * Network connections for some users fail (in some cases on a direct
  interface, in others when connecting a VPN) because the 'host' command
  called by /usr/lib/avahi/avahi-daemon-check-dns.sh to check for .local
  in DNS never times out like it should, leaving the script hanging
  indefinitely and blocking interface bring-up and start-up. This
  appears to be a bug in host triggered only in some circumstances;
  since the issue in 'host' has not been easily identified, we implement
  a workaround to call it under 'timeout', which in any case acts as a
  fall-back.


[Touch-packages] [Bug 1752411] Re: bind9-host, avahi-daemon-check-dns.sh hang forever causes network connections to get stuck

2018-08-23 Thread Trent Lloyd
> When host call fails (even with timeout), it returns "1" claiming
"dns_has_local()=true".

0 = true, 1 = false (you implied the opposite)

What may add confusion here is that the grep -vq check is an extra
check to make sure host didn't return 0 (success = we found .local) but
then print 'not found' anyway. So the function returns 0 (true) when
host returns 0, and returns 1 when host returns anything else
(including a timeout); 1 = false, which means leave avahi enabled.


[Touch-packages] [Bug 1752411] Re: bind9-host, avahi-daemon-check-dns.sh hang forever causes network connections to get stuck

2018-09-03 Thread Trent Lloyd
** Tags removed: verification-needed-bionic
** Tags added: verification-done-bionic


Status in avahi package in Ubuntu:
  Fix Released
Status in bind9 package in Ubuntu:
  Confirmed
Status in openconnect package in Ubuntu:
  Invalid
Status in strongswan package in Ubuntu:
  Invalid
Status in avahi source package in Bionic:
  Fix Committed
Status in bind9 source package in Bionic:
  Confirmed
Status in avahi source package in Cosmic:
  Fix Released
Status in bind9 source package in Cosmic:
  Confirmed
Status in avahi package in Debian:
  New


[Touch-packages] [Bug 1752411] Re: bind9-host, avahi-daemon-check-dns.sh hang forever causes network connections to get stuck

2018-09-03 Thread Trent Lloyd
Confirmed the fix works on my affected system: after upgrading to
0.7-3.1ubuntu1.1 from bionic-proposed and rebooting, the system boots
(relatively) quickly as expected and doesn't get stuck.

Verified the file in place is the original from the package (and not one
modified by me).

Lastly, to further verify it was the timeout and not some other change,
I changed the timeout from 5 to 17 (to make the time taken more
obvious), rebooted, and compared the time spent in network.service with
systemd-analyze: it changed from 18 seconds (with timeout=5) to 30.368s
(with timeout=17), roughly the additional 12 seconds expected. I also
observed that SSH took that much extra time to become ready.

Looks good to me; I changed the bionic verification tag to done but left
'verification-needed'.
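
As a sanity check on that comparison (systemd-analyze itself needs a
booted systemd host, so only the arithmetic is reproduced here; the
figures are the rounded measurements from above):

```shell
#!/bin/sh
# network.service time as reported by `systemd-analyze blame`, rounded:
before=18   # seconds with timeout=5
after=30    # seconds with timeout=17
echo "delta=$((after - before))s"   # prints delta=12s, matching 17-5=12
```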

** Changed in: avahi (Ubuntu Bionic)
   Importance: Undecided => High

** Changed in: avahi (Ubuntu Bionic)
 Assignee: (unassigned) => Trent Lloyd (lathiat)


[Touch-packages] [Bug 1794229] [NEW] python packages should not ship colliding /usr/lib/python3/dist-packages/.pytest_cache

2018-09-24 Thread Trent Lloyd
Public bug reported:

Python packages should not ship /usr/lib/python3/dist-
packages/.pytest_cache

Recently, two packages, python3-alembic and python3-astroid, both
shipped these files, which collided on package install.

dh-python has been improved upstream to strip all hidden "dot"
directories (Debian #907871).

This change should be imported, and both alembic and astroid should be
rebuilt (plus any other packages shipping this file).


Preparing to unpack .../178-python3-astroid_2.0.4-1_all.deb ...
Unpacking python3-astroid (2.0.4-1) over (1.6.5-1ubuntu4) ...
dpkg: error processing archive 
/tmp/apt-dpkg-install-rbF4wE/178-python3-astroid_2.0.4-1_all.deb (--unpack):
 trying to overwrite 
'/usr/lib/python3/dist-packages/.pytest_cache/v/cache/nodeids', which is also 
in package python3-alembic 1.0.0-1ubuntu1
dpkg-deb: error: paste subprocess was killed by signal (Broken pipe)
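
The class of problem is easy to check for: a package tree should not
contain directories whose names start with a dot. A hedged sketch (the
layout below is illustrative, not taken from either package):

```shell
#!/bin/sh
# Build a throwaway tree mimicking dist-packages with a stray cache dir,
# then list any hidden "dot" directories, i.e. the entries dh-python
# should strip before the package is built.
tree=$(mktemp -d)
mkdir -p "$tree/.pytest_cache/v/cache" "$tree/alembic"

find "$tree" -type d -name '.*' | sed "s|^$tree/||"   # prints .pytest_cache

rm -rf "$tree"
```

On an installed system, `dpkg -S` on the colliding path from the log
above would likewise show which package owns it (assuming the package is
installed).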

** Affects: alembic (Ubuntu)
 Importance: Critical
 Status: New

** Affects: astroid (Ubuntu)
 Importance: Critical
 Status: New

** Affects: dh-python (Ubuntu)
 Importance: Critical
 Status: New

** Affects: debian
 Importance: Unknown
 Status: Unknown

** Changed in: dh-python (Ubuntu)
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to dh-python in Ubuntu.
https://bugs.launchpad.net/bugs/1794229

Title:
  python packages should not ship colliding /usr/lib/python3/dist-
  packages/.pytest_cache

Status in alembic package in Ubuntu:
  New
Status in astroid package in Ubuntu:
  New
Status in dh-python package in Ubuntu:
  New
Status in Debian:
  Unknown


To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/alembic/+bug/1794229/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1794229] Re: python packages should not ship colliding /usr/lib/python3/dist-packages/.pytest_cache

2018-09-24 Thread Trent Lloyd
It appears these packages are in main but are mainly pulled in (at
least on my system) by 'devscripts'. This affects cosmic.

** Also affects: alembic (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: astroid (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: astroid (Ubuntu)
   Importance: Undecided => Critical

** Changed in: alembic (Ubuntu)
   Importance: Undecided => Critical

** Bug watch added: Debian Bug tracker #907871
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=907871

** Also affects: debian via
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=907871
   Importance: Unknown
   Status: Unknown



[Touch-packages] [Bug 1794229] Re: python packages should not ship colliding /usr/lib/python3/dist-packages/.pytest_cache

2018-09-24 Thread Trent Lloyd
Looks like the recent astroid rebuild/changes removed the file, but it
is still present in alembic.

** Changed in: alembic (Ubuntu)
   Importance: Critical => High

** Changed in: astroid (Ubuntu)
   Importance: Critical => High

** Changed in: dh-python (Ubuntu)
   Importance: Critical => High

** Changed in: astroid (Ubuntu)
   Status: New => Invalid



[Touch-packages] [Bug 1099184] Re: Avahi reports IPv6/IPv4 and "dnssd" CUPS backend only tries IPv6, no IPv4 fallback

2018-10-02 Thread Trent Lloyd
Hi Nim (nimpetro),

You've added a comment to an existing bug, which does not appear to be
related to the problem you are having. This bug is about network issues
with CUPS connecting to printers, not about incorrect page print
formatting.

Please either open a new bug for your issue or seek support through our forums 
or another public resource. You can find a list of such community support 
resources here (load the page then click "Find community support")
https://www.ubuntu.com/support/community-support

I would suggest those might be a better place to start than a new bug.
Off hand, if I had to guess, I would suggest checking that your page
size is set correctly when printing; it's possible it does not match
your actual paper. I'm not sure about the 90 degree rotation. But
please don't reply to that guess in this bug; take that discussion to a
new thread on the forums/discourse.

Regards,
Trent

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1099184

Title:
  Avahi reports IPv6/IPv4 and "dnssd" CUPS backend only tries IPv6, no
  IPv4 fallback

Status in avahi package in Ubuntu:
  Confirmed

Bug description:
  lsb_release -rd
  Description:Ubuntu 12.04.1 LTS
  Release:12.04
  -
  avahi-daemon 0.6.30-5ubuntu2
  cups 1.5.3-0ubuntu6
  

  
  Avahi by default publishes services over IPv6 as well, since 
/etc/avahi/avahi-daemon.conf contains
  use-ipv6=yes

  At the same time services are listening on IPv4 addresses only.

  CUPS is able to discover published printers via the dnssd protocol,
  but its ipp backend prints only to the resolved IPv6 address, which
  does not answer, and it does not try the IPv4 address.
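
The missing fallback could be sketched like this (illustration only, not
CUPS code: try_connect is a hypothetical stand-in for the backend's IPP
connection attempt, and the two addresses are the ones avahi-browse
reports below):

```shell
#!/bin/sh
# Try each resolved address in turn instead of stopping after IPv6.
try_connect() {
  case "$1" in
    fe80::*) return 1 ;;   # simulate: link-local IPv6 does not answer
    *)       return 0 ;;   # simulate: the IPv4 address answers
  esac
}

for addr in "fe80::240:f4ff:feff:5532" "192.168.1.2"; do
  if try_connect "$addr"; then
    echo "connected via $addr"
    break
  fi
done   # prints: connected via 192.168.1.2
```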

  
  Excerpt from "avahi-browse -a -t -r"  to show the IPv6 and IPv4 addresses

  +  wlan0 IPv6 Hewlett-Packard HP LaserJet 1100 @ hjemme Internet Printer  
   local
  +  wlan0 IPv4 Hewlett-Packard HP LaserJet 1100 @ hjemme Internet Printer  
   local
  +  wlan0 IPv6 MobilIgor5a [00:26:82:9d:b9:6d]   Workstation   
   local
  +  wlan0 IPv6 hjemme [00:40:f4:ff:55:32]Workstation   
   local

  []

  =  wlan0 IPv6 Hewlett-Packard HP LaserJet 1100 @ hjemme Internet Printer  
   local
 hostname = [hjemme.local]
 address = [fe80::240:f4ff:feff:5532]
 port = [631]
 txt = ["printer-type=0x900E" "printer-state=3" "Scan=F" "Sort=F" "Bind=F" 
"Punch=F" "Collate=F" "Copies=F" "Staple=F" "Duplex=F" "Color=T" "Fax=F" 
"Binary=F" "Transparent=F" "TLS=1.2" 
"UUID=1577ef95-eecb-3eae-6d02-aa29003eb546" "URF=DM3" 
"pdl=application/octet-stream,application/pdf,application/postscript,image/jpeg,image/png,image/urf"
 "product=(HP LaserJet 1100xi Printer)" "priority=0" "note=hjemme" 
"adminurl=http://hjemme.local:631/printers/Hewlett-Packard-HP-LaserJet-1100"; 
"ty=HP LaserJet 1100, hpcups 3.12.2" 
"rp=printers/Hewlett-Packard-HP-LaserJet-1100" "qtotal=1" "txtvers=1"]
  =  wlan0 IPv6 hjemmeRemote Disk 
Management local
 hostname = [hjemme.local]
 address = [fe80::240:f4ff:feff:5532]
 port = [22]
 txt = []
  =  wlan0 IPv6 hjemme [00:40:f4:ff:55:32]Workstation   
   local
 hostname = [hjemme.local]
 address = [fe80::240:f4ff:feff:5532]
 port = [9]
 txt = []
  =  wlan0 IPv4 Hewlett-Packard HP LaserJet 1100 @ hjemme Internet Printer  
   local
 hostname = [hjemme.local]
 address = [192.168.1.2]
 port = [631]
 txt = ["printer-type=0x900E" "printer-state=3" "Scan=F" "Sort=F" "Bind=F" 
"Punch=F" "Collate=F" "Copies=F" "Staple=F" "Duplex=F" "Color=T" "Fax=F" 
"Binary=F" "Transparent=F" "TLS=1.2" 
"UUID=1577ef95-eecb-3eae-6d02-aa29003eb546" "URF=DM3" 
"pdl=application/octet-stream,application/pdf,application/postscript,image/jpeg,image/png,image/urf"
 "product=(HP LaserJet 1100xi Printer)" "priority=0" "note=hjemme" 
"adminurl=http://hjemme.local:631/printers/Hewlett-Packard-HP-LaserJet-1100"; 
"ty=HP LaserJet 1100, hpcups 3.12.2" 
"rp=printers/Hewlett-Packard-HP-LaserJet-1100" "qtotal=1" "txtvers=1"]

  
  Excerpt from /var/log/cups/error_log to show that only IPv6 address is tried

  I [13/Jan/2013:18:02:55 +0100] [Job 61] Started backend 
/usr/lib/cups/backend/dnssd (PID 7179)
  D [13/Jan/2013:18:02:55 +0100] [Job 61] Resolving "Hewlett-Packard HP 
LaserJet 1100 @ hjemme._ipp._tcp.local"...
  D [13/Jan/2013:18:02:55 +0100] [Job 61] STATE: +connecting-to-device
  D [13/Jan/2013:18:02:55 +0100] [Job 61] Resolving "Hewlett-Packard HP 
LaserJet 1100 @ hjemme", regtype="_ipp._tcp", domain="local."...
  D [13/Jan/2013:18:02:55 +0100] [Job 61] PPD uses qualifier 'Gray.Plain.'
  D [13/Jan/2013:18:02:55 +0100] [Job 61] Calling 
FindDeviceById(LaserJet_1100xi_Printer)
  D [13/Jan/2013:18:02

[Touch-packages] [Bug 1783272] Re: upgrading systemd package restarts systemd-networkd and briefly takes down network interfaces

2018-07-24 Thread Trent Lloyd
** Attachment added: "log of terminal session during upgrade"
   
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1783272/+attachment/5167023/+files/apt-upgrade-console-log.txt

** Description changed:

  Upgrading the systemd package, which contains systemd-networkd, appears
  to restart networkd and subsequently reconfigure network interfaces
  causing a brief connectivity outage.
  
  This is a bionic system which has a network bridge as it's primary
  interface through netplan.
  
  You can see from the logs that the interface appears to have been briefly 
taken down
  > Jul 24 09:40:32 optane kernel: [ 1935.046068] br0: port 1(eno1) entered 
disabled state
  
  We also see logs of networkd restarting
- > 248 Jul 24 09:40:32 optane systemd[1]: Stopping Network Service...  

- > 253 Jul 24 09:40:32 optane systemd[1]: Stopped Network Service. 

- > 254 Jul 24 09:40:32 optane systemd[1]: Starting Network Service... 
+ > 248 Jul 24 09:40:32 optane systemd[1]: Stopping Network Service...
+ > 253 Jul 24 09:40:32 optane systemd[1]: Stopped Network Service.
+ > 254 Jul 24 09:40:32 optane systemd[1]: Starting Network Service...
  
  Based on the ordering of the messages from Avahi, I also believe that
  the IP address was first removed, then it was taken down, brought back
  up, and the IP re-added. But I can not state that with 100% certainty.
  However if you just set an interface down manually (ip link set br0
  down), usually Avahi notes the interface is relevant before the address
  was removed.  We see the opposite here.  It's possible the ordering is
  just not entirely deterministic though.
  
  Jul 24 09:40:32 optane avahi-daemon[1611]: Withdrawing address record for 
10.48.134.22 on br0.
  Jul 24 09:40:32 optane avahi-daemon[1611]: Interface br0.IPv4 no longer 
relevant for mDNS.
  
- 
- The main reason I noticed this, is that the unofficial oracle-java8-installer 
package upgraded at the same time - and it's wget to download java failed due 
to "Network is unreachable" as it was upgraded as the same time. To be clear, 
I'm not suggesting necessarily that this specific package was affected is the 
bug, but it's the reason I noticed the restart and it did cause an upgrade 
failure which I resumed with "dpkg --configure -a".  However there are many 
other implications of the network interface being reconfigured.
+ The main reason I noticed this, is that the unofficial oracle-
+ java8-installer package upgraded at the same time - and it's wget to
+ download java failed due to "Network is unreachable" as it was upgraded
+ as the same time. To be clear, I'm not suggesting necessarily that this
+ specific package was affected is the bug, but it's the reason I noticed
+ the restart and it did cause an upgrade failure which I resumed with
+ "dpkg --configure -a".  However there are many other implications of the
+ network interface being reconfigured.
  
  Generally it's probably just not ideal to have networkd restart and
  reconfigure the network interfaces - no matter the possible causes. But
- from a quick though about realistic  implications
+ from a quick thought about realistic implications
  
+  - it seems the bridge was not deleted/re-created which would be bad for
+ systems using libvirt/lxd/etc where the VMs may drop off the bridge. The
+ interface index didn't change so hopefully we're safe from this one
+ ("Jul 24 09:40:32 optane systemd-networkd[17118]: br0: netdev exists,
+ using existing without changing its parameters")
  
-  - it seems the bridge was not deleted/re-created which would be bad for 
systems using libvirt/lxd/etc where the VMs may drop off the bridge. The 
interface index didn't change so hopefully we're safe from this one ("Jul 24 
09:40:32 optane systemd-networkd[17118]: br0: netdev exists, using existing 
without changing its parameters")
- 
-  - Another use case other than oracle-java8-installer that is highly
+  - Another use case other than oracle-java8-installer that is highly
  likely to be impacted is daemons configured to bind to a specific IP
  address. By default, those binds will fail if the IP address doesn't
  exist. It's possible these two restarts will race and those services
  will fail to restart. An example where this could happen, is Apache2. It
  would probably be hard to reproduce but logically likely to occur in
  some small number of cases.
+ 
+  - we potentially do want networkd to ideally restart to upgrade the
+ code, but ideally it would "diff" the network interface config and not
+ tear things down. I am using a 'switchport' match in my netplan config,
+ I wonder if this is related?

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1783272

Title:
  upgrading systemd package restarts systemd-networkd and briefly takes
  down network interfaces

Status in systemd package in Ubuntu:
  New

[Touch-packages] [Bug 1783272] [NEW] upgrading systemd package restarts systemd-networkd and briefly takes down network interfaces

2018-07-24 Thread Trent Lloyd
Public bug reported:

Upgrading the systemd package, which contains systemd-networkd, appears
to restart networkd and subsequently reconfigure network interfaces
causing a brief connectivity outage.

This is a bionic system which has a network bridge as its primary
interface, configured through netplan.

You can see from the logs that the interface appears to have been briefly taken 
down
> Jul 24 09:40:32 optane kernel: [ 1935.046068] br0: port 1(eno1) entered 
> disabled state

We also see logs of networkd restarting
> 248 Jul 24 09:40:32 optane systemd[1]: Stopping Network Service...
> 253 Jul 24 09:40:32 optane systemd[1]: Stopped Network Service.
> 254 Jul 24 09:40:32 optane systemd[1]: Starting Network Service...

Based on the ordering of the messages from Avahi, I also believe that
the IP address was first removed, then it was taken down, brought back
up, and the IP re-added. But I can not state that with 100% certainty.
However if you just set an interface down manually (ip link set br0
down), usually Avahi notes the interface is relevant before the address
was removed.  We see the opposite here.  It's possible the ordering is
just not entirely deterministic though.

Jul 24 09:40:32 optane avahi-daemon[1611]: Withdrawing address record for 
10.48.134.22 on br0.
Jul 24 09:40:32 optane avahi-daemon[1611]: Interface br0.IPv4 no longer 
relevant for mDNS.

The main reason I noticed this is that the unofficial
oracle-java8-installer package was upgraded at the same time, and its
wget call to download Java failed with "Network is unreachable". To be
clear, I'm not necessarily suggesting that this specific package being
affected is the bug, but it is how I noticed the restart, and it did
cause an upgrade failure which I resumed with "dpkg --configure -a".
However there are many other implications of the network interface
being reconfigured.

Generally it's probably just not ideal to have networkd restart and
reconfigure the network interfaces, whatever the cause. But from a
quick thought about realistic implications:

 - it seems the bridge was not deleted/re-created which would be bad for
systems using libvirt/lxd/etc where the VMs may drop off the bridge. The
interface index didn't change so hopefully we're safe from this one
("Jul 24 09:40:32 optane systemd-networkd[17118]: br0: netdev exists,
using existing without changing its parameters")

 - Another use case other than oracle-java8-installer that is highly
likely to be impacted is daemons configured to bind to a specific IP
address. By default, those binds will fail if the IP address doesn't
exist. It's possible these two restarts will race and those services
will fail to restart. An example where this could happen, is Apache2. It
would probably be hard to reproduce but logically likely to occur in
some small number of cases.

 - we do potentially want networkd to restart so that the upgraded code
is used, but ideally it would "diff" the network interface config and
not tear things down. I am using a 'switchport' match in my netplan
config; I wonder if this is related?
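To illustrate the bind failure mode in the second bullet (and the usual
escape hatch), here is a minimal Python sketch. `try_bind` and
`bind_freebind` are hypothetical helper names, and 203.0.113.1
(TEST-NET-3) is assumed not to be configured on the host:

```python
import errno
import socket

# Not always exposed by Python's socket module; value from <linux/in.h>.
IP_FREEBIND = 15

def try_bind(ip, port=0):
    """Return None if a TCP socket can bind to `ip`, else the errno."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((ip, port))
        return None
    except OSError as err:
        # A daemon restarting while networkd has momentarily removed the
        # address hits EADDRNOTAVAIL here and fails to start.
        return err.errno
    finally:
        s.close()

def bind_freebind(ip, port=0):
    """Bind even if `ip` is not (yet) configured on any interface,
    which is how a daemon can opt out of this race on Linux."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, IP_FREEBIND, 1)
    s.bind((ip, port))
    return s
```

A service that cannot use IP_FREEBIND (or the net.ipv4.ip_nonlocal_bind
sysctl) is exposed to exactly the restart race described above.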

** Affects: systemd (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1783272

Title:
  upgrading systemd package restarts systemd-networkd and briefly takes
  down network interfaces

Status in systemd package in Ubuntu:
  New


[Touch-packages] [Bug 1783272] Re: upgrading systemd package restarts systemd-networkd and briefly takes down network interfaces

2018-07-24 Thread Trent Lloyd
** Attachment added: "netplan config"
   
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1783272/+attachment/5167024/+files/01-netcfg.yaml

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1783272

Title:
  upgrading systemd package restarts systemd-networkd and briefly takes
  down network interfaces

Status in systemd package in Ubuntu:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1783272/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1783272] Re: upgrading systemd package restarts systemd-networkd and briefly takes down network interfaces

2018-07-24 Thread Trent Lloyd
** Attachment added: "syslog"
   
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1783272/+attachment/5167022/+files/syslog

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1783272

Title:
  upgrading systemd package restarts systemd-networkd and briefly takes
  down network interfaces

Status in systemd package in Ubuntu:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1783272/+subscriptions



[Touch-packages] [Bug 1752411] Re: bind9-host, avahi-daemon-check-dns.sh hang forever causes network connections to get stuck

2018-07-31 Thread Trent Lloyd
Hoping to draw attention to this again. Now that 18.04.1 is out, more
and more users are likely to hit this issue as they upgrade. This issue
applies equally to desktop and server scenarios.

I would like to get lp1752411-avahi-host-timeout.diff sponsored for
upload, please.
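The essence of the attached patch, sketched here in Python rather than
the shell of avahi-daemon-check-dns.sh (the real diff is not reproduced
here, and `unicast_local_domain_present` is a hypothetical name): bound
the unicast ".local" SOA probe with a timeout so a wedged resolver can
never hang the network bring-up path indefinitely.

```python
import subprocess

def unicast_local_domain_present(timeout=5):
    """Probe for a unicast ".local" DNS domain, but never block forever."""
    try:
        result = subprocess.run(["host", "-t", "soa", "local."],
                                capture_output=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return False   # a hung resolver must not stall boot/ifup
    except FileNotFoundError:
        return False   # bind9-host not installed; assume no conflict
    return result.returncode == 0
```

A timeout is treated the same as "no answer", which matches the safe
default of leaving mDNS enabled.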

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1752411

Title:
  bind9-host, avahi-daemon-check-dns.sh hang forever causes network
  connections to get stuck

Status in avahi package in Ubuntu:
  Confirmed
Status in bind9 package in Ubuntu:
  Confirmed
Status in openconnect package in Ubuntu:
  Invalid
Status in avahi package in Debian:
  New

Bug description:
  On 18.04 Openconnect connects successfully to any of multiple VPN
  concentrators but network traffic does not flow across the VPN tunnel
  connection. When testing on 16.04 this works flawlessly. This also
  worked on this system when it was on 17.10.

  I have tried reducing the mtu of the tun0 network device but this has
  not resulted in me being able to successfully ping the IP address.

  Example showing ping attempt to the IP of DNS server:

  ~$ cat /etc/resolv.conf 
  # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
  # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
  # 127.0.0.53 is the systemd-resolved stub resolver.
  # run "systemd-resolve --status" to see details about the actual nameservers.

  nameserver 172.29.88.11
  nameserver 127.0.0.53

  liam@liam-lat:~$ netstat -nr
  Kernel IP routing table
  Destination Gateway Genmask Flags   MSS Window  irtt Iface
  0.0.0.0 192.168.1.1 0.0.0.0 UG0 0  0 
wlp2s0
  105.27.198.106  192.168.1.1 255.255.255.255 UGH   0 0  0 
wlp2s0
  169.254.0.0 0.0.0.0 255.255.0.0 U 0 0  0 
docker0
  172.17.0.0  0.0.0.0 255.255.0.0 U 0 0  0 
docker0
  172.29.0.0  0.0.0.0 255.255.0.0 U 0 0  0 tun0
  172.29.88.110.0.0.0 255.255.255.255 UH0 0  0 tun0
  192.168.1.0 0.0.0.0 255.255.255.0   U 0 0  0 
wlp2s0
  liam@liam-lat:~$ ping 172.29.88.11
  PING 172.29.88.11 (172.29.88.11) 56(84) bytes of data.
  ^C
  --- 172.29.88.11 ping statistics ---
  4 packets transmitted, 0 received, 100% packet loss, time 3054ms

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: openconnect 7.08-3
  ProcVersionSignature: Ubuntu 4.15.0-10.11-generic 4.15.3
  Uname: Linux 4.15.0-10-generic x86_64
  ApportVersion: 2.20.8-0ubuntu10
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Wed Feb 28 22:11:33 2018
  InstallationDate: Installed on 2017-06-15 (258 days ago)
  InstallationMedia: Ubuntu 16.04.1 LTS "Xenial Xerus" - Release amd64 
(20160719)
  SourcePackage: openconnect
  UpgradeStatus: Upgraded to bionic on 2018-02-22 (6 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1752411/+subscriptions



[Touch-packages] [Bug 1799265] Re: avahi-daemon high cpu, unusable networking

2018-10-22 Thread Trent Lloyd
Instructions fixed to work with the 80 column limit of comments:

echo "deb http://ddebs.ubuntu.com $(lsb_release -cs) main restricted universe 
multiverse" | \
 sudo tee -a /etc/apt/sources.list.d/ddebs.list
sudo apt update
sudo apt install linux-tools-generic ubuntu-dbgsym-keyring \
 linux-cloud-tools-generic
sudo apt update
sudo apt install avahi-daemon-dbgsym libavahi-common3-dbgsym \
libavahi-core7-dbgsym libavahi-glib1-dbgsym libc6-dbgsym libcap2-dbgsym \
libdaemon0-dbgsym libdbus-1-3-dbgsym libecore-avahi1-dbgsym libexpat1-dbgsym \
libgcrypt20-dbgsym libgpg-error0-dbgsym liblz4-1-dbgsym liblzma5-dbgsym \
libnss-systemd-dbgsym libsystemd0-dbgsym

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1799265

Title:
  avahi-daemon high cpu, unusable networking

Status in avahi package in Ubuntu:
  New

Bug description:
  Currently running Kubuntu 18.10, Dell XPS 13 9350

  Since updating from Kubuntu 18.04 to 18.10, the avahi-daemon has been
  consistently hampering network performance and using CPU for long
  periods of time.

  When booting machine from off state, avahi-daemon uses an entire CPU
  at max load for approx 10 minutes. During this time, internet
  connectivity via wifi is essentially unusable. The wifi connection is
  good, but it seems that HTTP transactions are cut off midway, so no
  webpage is able to load.

  When waking from sleep, the avahi-daemon causes similar symptoms, but
  with less than 1 full cpu usage, and with somewhat less degraded
  network performance, but still quite unusable.

  I have never had issues with avahi prior to the 18.10 upgrade, so I am
  fairly confident the issue is rooted in that change.

  ProblemType: Bug
  DistroRelease: Ubuntu 18.10
  Package: avahi-daemon 0.7-4ubuntu2
  ProcVersionSignature: Ubuntu 4.18.0-10.11-generic 4.18.12
  Uname: Linux 4.18.0-10-generic x86_64
  ApportVersion: 2.20.10-0ubuntu13
  Architecture: amd64
  CurrentDesktop: KDE
  Date: Mon Oct 22 10:00:34 2018
  InstallationDate: Installed on 2017-07-24 (455 days ago)
  InstallationMedia: Ubuntu 17.04 "Zesty Zapus" - Release amd64 (20170412)
  ProcEnviron:
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   LD_PRELOAD=
   SHELL=/bin/bash
  SourcePackage: avahi
  UpgradeStatus: Upgraded to cosmic on 2018-10-20 (2 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1799265/+subscriptions



[Touch-packages] [Bug 1799265] Re: avahi-daemon high cpu, unusable networking

2018-10-22 Thread Trent Lloyd
Hi Matt,

Thanks for the report. I'd like to profile avahi using perf to get
information on what functions are being executed. Could you run the
following commands to generate such data?

If you are unsure about any of this feel free to ask.


echo "deb http://ddebs.ubuntu.com $(lsb_release -cs) main restricted universe 
multiverse" | sudo tee -a /etc/apt/sources.list.d/ddebs.list
sudo apt update
sudo apt install linux-tools-generic ubuntu-dbgsym-keyring 
linux-cloud-tools-generic
sudo apt update
sudo apt install avahi-daemon-dbgsym libavahi-common3-dbgsym 
libavahi-core7-dbgsym libavahi-glib1-dbgsym libc6-dbgsym libcap2-dbgsym 
libdaemon0-dbgsym libdbus-1-3-dbgsym libecore-avahi1-dbgsym libexpat1-dbgsym 
libgcrypt20-dbgsym libgpg-error0-dbgsym liblz4-1-dbgsym liblzma5-dbgsym 
libnss-systemd-dbgsym libsystemd0-dbgsym 

then to record a profile:

sudo perf record -p $(cat /run/avahi-daemon/pid) -g -- sleep 60

This will exit after 60 seconds, then generate perf script output:

sudo perf script > avahi-perf.script.txt
sudo perf report -n --stdio > avahi-perf.report.txt

And then upload the resulting avahi-perf.script.txt and avahi-
perf.report.txt for analysis.


You'll want to make sure avahi is using 100%+ CPU at the time you do this.

Lastly from the same bootup, can you please collect the log info from 
journalctl:
journalctl -u avahi-daemon --no-pager --no-tail > avahi-journal.txt


Thanks for reporting the bug! Hopefully we can figure it out.

In the meantime, if you want to disable avahi you can try this:
sudo systemctl disable avahi-daemon.socket avahi-daemon
sudo systemctl stop avahi-daemon.socket avahi-daemon

(to re-enable change to start and enable)

Regards,
Trent

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1799265

Title:
  avahi-daemon high cpu, unusable networking

Status in avahi package in Ubuntu:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1799265/+subscriptions



[Touch-packages] [Bug 1698248] Re: avahi-daemon constantly registers/withdraws IPv6 address record

2017-06-15 Thread Trent Lloyd
This is known upstream but not yet solved; I am working on it and have a theory.

** Changed in: avahi (Ubuntu)
   Status: New => Confirmed

** Bug watch added: github.com/lathiat/avahi/issues #41
   https://github.com/lathiat/avahi/issues/41

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1698248

Title:
  avahi-daemon constantly registers/withdraws IPv6 address record

Status in avahi package in Ubuntu:
  Confirmed

Bug description:
  Running Ubuntu 17.04, have avahi-daemon 0.6.32-1ubuntu1 installed.

  If I check syslog, every 5-10 seconds there is a set of messages
  like:

  Jun 15 20:53:01 dan-pc avahi-daemon[6673]: Registering new address record for 
2001:1970:5ea2:c101:a899:2dfa:5a35:61fd on enp0s31f6.*.
  Jun 15 20:53:01 dan-pc avahi-daemon[6673]: Registering new address record for 
2001:1970:5ea2:c101:64ee:d405:6df8:900a on enp0s31f6.*.
  Jun 15 20:53:01 dan-pc avahi-daemon[6673]: Withdrawing address record for 
2001:1970:5ea2:c101:64ee:d405:6df8:900a on enp0s31f6.
  Jun 15 20:53:01 dan-pc avahi-daemon[6673]: Withdrawing address record for 
2001:1970:5ea2:c101:a899:2dfa:5a35:61fd on enp0s31f6.

  etc..., which is clearly not normal. (This was also tested on a brand
  new installation of Ubuntu 17.04, same issue). Attached is a small
  sample of my syslog file.

  This also seems to impact other programs that use an ipv6 connection.
  For example, Google Chrome - half the time I try to load a webpage I
  get "your connection was interrupted; a network change was detected".
  After about a half-second, it proceeds to load the webpage, although
  often doesn't load it properly. This does not affect all websites
  (likely only those that support ipv6).

  See https://askubuntu.com/questions/905866/new-ubuntu-17-04-problem-your-connection-was-interrupted for other people experiencing the same bug, both in avahi and Chrome.

  Disabling ipv6 seems to work as a temporary workaround.

  I am also not sure whether this bug is caused by avahi, or whether it
  is just experiencing the most "symptoms".

  ProblemType: Bug
  DistroRelease: Ubuntu 17.04
  Package: avahi-daemon 0.6.32-1ubuntu1
  ProcVersionSignature: Ubuntu 4.10.0-22.24-generic 4.10.15
  Uname: Linux 4.10.0-22-generic x86_64
  NonfreeKernelModules: nvidia_uvm nvidia_drm nvidia_modeset nvidia
  ApportVersion: 2.20.4-0ubuntu4.1
  Architecture: amd64
  CurrentDesktop: X-Cinnamon
  Date: Thu Jun 15 20:42:19 2017
  InstallationDate: Installed on 2017-06-10 (5 days ago)
  InstallationMedia: Ubuntu 17.04 "Zesty Zapus" - Release amd64 (20170412)
  SourcePackage: avahi
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1698248/+subscriptions



[Touch-packages] [Bug 1698248] Re: avahi-daemon constantly registers/withdraws IPv6 address record

2017-06-15 Thread Trent Lloyd
https://github.com/lathiat/avahi/issues/41

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1698248




[Touch-packages] [Bug 1679117] Re: avahi-daemon crashed with SIGABRT in avahi_malloc()

2017-04-04 Thread Trent Lloyd
*** This bug is a duplicate of bug 1668559 ***
https://bugs.launchpad.net/bugs/1668559

Commenting here as the duplicate parent is Private at the moment.

The main issue is here: avahi-daemon is hitting rlimit-data, which the config 
file sets to 4M by default:
> kernel: mmap: avahi-daemon (992): VmData 4272128 exceed data ulimit 4194304. 
> Update limits or use boot option ignore_rlimit_data.

I am not sure what causes usage to go this high on a few machines; possibly
the number of services on the network. Testing here, avahi generally
needs about 2M when publishing 30+ services, so this limit is likely far
too low given that common usage already reaches over 50% of it.


The main thing to confirm here is that this is simply hitting the limit in 
normal usage, and not a bug causing runaway memory usage. If anyone would like 
to test, it would be great to:

 (a) Confirm whether this happens constantly (does avahi always crash) or 
whether it happened once and did not recur. If you restart it now, does it 
crash again? systemctl restart avahi-daemon
 (b) Modify /etc/avahi/avahi-daemon.conf and change "rlimit-data=4194304" to 
"rlimit-data=16777216"
 (c) Restart avahi-daemon again: systemctl restart avahi-daemon
 (d) Report back whether the crash occurred both before and after the change
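For anyone testing at scale, steps (b) and (c) can be scripted. A rough sketch below, run against a local stand-in file for safety (the filename avahi-daemon-data.conf is just a placeholder; on a real system the file is /etc/avahi/avahi-daemon.conf and the daemon must be restarted afterwards):

```shell
# Bump rlimit-data from 4 MiB to 16 MiB. Demonstrated on a local copy of
# the [rlimits] stanza; edit /etc/avahi/avahi-daemon.conf on a real system.
conf=avahi-daemon-data.conf
printf '[rlimits]\nrlimit-data=4194304\n' > "$conf"   # sample stanza
sed -i.bak 's/^rlimit-data=4194304$/rlimit-data=16777216/' "$conf"
grep '^rlimit-data=' "$conf"    # prints rlimit-data=16777216
# then: sudo systemctl restart avahi-daemon
```

The -i.bak suffix keeps the original file around as avahi-daemon-data.conf.bak, which is handy when reporting back before/after behaviour.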

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1679117

Title:
  avahi-daemon crashed with SIGABRT in avahi_malloc()

Status in avahi package in Ubuntu:
  New

Bug description:
  The problem started with Ubuntu 16.10 and is still here after the upgrade to 
17.04.
  A warning message displays several times a day.
  I click submit or cancel and nothing seems to happen.
  No other problems but the message. Everything seems to work properly.

  ProblemType: Crash
  DistroRelease: Ubuntu 17.04
  Package: avahi-daemon 0.6.32-1ubuntu1
  ProcVersionSignature: Ubuntu 4.10.0-14.16-generic 4.10.3
  Uname: Linux 4.10.0-14-generic x86_64
  NonfreeKernelModules: nvidia_uvm nvidia_drm nvidia_modeset nvidia
  ApportVersion: 2.20.4-0ubuntu2
  Architecture: amd64
  CrashCounter: 1
  Date: Mon Apr  3 12:26:42 2017
  ExecutablePath: /usr/sbin/avahi-daemon
  InstallationDate: Installed on 2014-04-14 (1084 days ago)
  InstallationMedia: Ubuntu 14.04 LTS "Trusty Tahr" - Daily amd64 (20140413)
  ProcCmdline: avahi-daemon:\ running\ [hostname.local]
  ProcEnviron:
   
  Signal: 6
  SourcePackage: avahi
  StacktraceTop:
   ?? () from /usr/lib/x86_64-linux-gnu/libavahi-common.so.3
   avahi_malloc () from /usr/lib/x86_64-linux-gnu/libavahi-common.so.3
   avahi_prio_queue_put () from /usr/lib/x86_64-linux-gnu/libavahi-core.so.7
   avahi_time_event_new () from /usr/lib/x86_64-linux-gnu/libavahi-core.so.7
   ?? () from /usr/lib/x86_64-linux-gnu/libavahi-core.so.7
  Title: avahi-daemon crashed with SIGABRT in avahi_malloc()
  UpgradeStatus: Upgraded to zesty on 2017-04-03 (0 days ago)
  UserGroups:

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/avahi/+bug/1679117/+subscriptions



[Touch-packages] [Bug 1702056] Re: perf broken on 4.11.0-9-generic (artful): /usr/lib/linux-tools/4.11.0-9-generic/perf: error while loading shared libraries: libbfd-2.28-system.so: cannot open shared

2017-07-25 Thread Trent Lloyd
binutils 2.29 was uploaded today, and it conflicts with linux-tools, which
it turns out Depends on binutils (>= 2.28), binutils (<< 2.29).

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to binutils in Ubuntu.
https://bugs.launchpad.net/bugs/1702056

Title:
  perf broken on 4.11.0-9-generic (artful): /usr/lib/linux-
  tools/4.11.0-9-generic/perf: error while loading shared libraries:
  libbfd-2.28-system.so: cannot open shared object file: No such file or
  directory

Status in binutils package in Ubuntu:
  Confirmed
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  I am unable to launch perf on 4.11.0-9-generic (artful) running on
  artful-proposed

  /usr/lib/linux-tools/4.11.0-9-generic/perf: error while loading shared
  libraries: libbfd-2.28-system.so: cannot open shared object file: No
  such file or directory

  It seems this library is provided by binutils, however the symlink
  doesn't exist like it does on my xenial system for comparison.

  lathiat@ubuntu:~$ dpkg -L binutils|grep bfd|grep so
  /usr/lib/x86_64-linux-gnu/libbfd-2.28.51-system.20170627.so

  lathiat@ubuntu:~$ ls /usr/lib/x86_64-linux-gnu/*libbfd*
  /usr/lib/x86_64-linux-gnu/libbfd-2.28.51-powerpc.20170627.so  
/usr/lib/x86_64-linux-gnu/libbfd-2.28.51-system.20170627.so

  On xenial:
  root@hyper:~# ls /usr/lib/x86_64-linux-gnu/*libbfd*
  /usr/lib/x86_64-linux-gnu/libbfd-2.26.1-system.so  
/usr/lib/x86_64-linux-gnu/libbfd.a
  /usr/lib/x86_64-linux-gnu/libbfd-2.26-system.so
/usr/lib/x86_64-linux-gnu/libbfd.so

  
  Not clear to me where the bug lies here, so starting with the 'perf' binary 
by way of the kernel package.  But may be an issue with something else.  I did 
notice quickly running 'ldd' in /usr/bin that it seems other files are linking 
against the above so directly.
  --- 
  ApportVersion: 2.20.5-0ubuntu5
  Architecture: amd64
  AudioDevicesInUse:
   USERPID ACCESS COMMAND
   /dev/snd/controlC1:  lathiat4190 F pulseaudio
   /dev/snd/controlC3:  lathiat4190 F pulseaudio
   /dev/snd/controlC0:  lathiat4190 F pulseaudio
   /dev/snd/controlC2:  lathiat4190 F pulseaudio
  CurrentDesktop: GNOME
  DistroRelease: Ubuntu 17.10
  EcryptfsInUse: Yes
  MachineType: Gigabyte Technology Co., Ltd. Z97X-Gaming G1 WIFI-BK
  NonfreeKernelModules: zfs zunicode zavl zcommon znvpair
  Package: linux (not installed)
  ProcFB:
   0 radeondrmfb
   1 inteldrmfb
  ProcKernelCmdLine: BOOT_IMAGE=/root@/boot/vmlinuz-4.11.0-9-generic 
root=ZFS=hostname/root ro
  ProcVersionSignature: Ubuntu 4.11.0-9.14-generic 4.11.7
  RelatedPackageVersions:
   linux-restricted-modules-4.11.0-9-generic N/A
   linux-backports-modules-4.11.0-9-generic  N/A
   linux-firmware1.167
  Tags:  artful
  Uname: Linux 4.11.0-9-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: adm cdrom dialout dip libvirt lxd plugdev sudo wireshark
  _MarkForUpload: True
  dmi.bios.date: 04/21/2015
  dmi.bios.vendor: American Megatrends Inc.
  dmi.bios.version: F7
  dmi.board.asset.tag: To be filled by O.E.M.
  dmi.board.name: Z97X-Gaming G1 WIFI-BK
  dmi.board.vendor: Gigabyte Technology Co., Ltd.
  dmi.board.version: x.x
  dmi.chassis.asset.tag: To Be Filled By O.E.M.
  dmi.chassis.type: 3
  dmi.chassis.vendor: Gigabyte Technology Co., Ltd.
  dmi.chassis.version: To Be Filled By O.E.M.
  dmi.modalias: 
dmi:bvnAmericanMegatrendsInc.:bvrF7:bd04/21/2015:svnGigabyteTechnologyCo.,Ltd.:pnZ97X-GamingG1WIFI-BK:pvrTobefilledbyO.E.M.:rvnGigabyteTechnologyCo.,Ltd.:rnZ97X-GamingG1WIFI-BK:rvrx.x:cvnGigabyteTechnologyCo.,Ltd.:ct3:cvrToBeFilledByO.E.M.:
  dmi.product.name: Z97X-Gaming G1 WIFI-BK
  dmi.product.version: To be filled by O.E.M.
  dmi.sys.vendor: Gigabyte Technology Co., Ltd.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/binutils/+bug/1702056/+subscriptions



[Touch-packages] [Bug 1702056] Re: perf broken on 4.11.0-9-generic (artful): /usr/lib/linux-tools/4.11.0-9-generic/perf: error while loading shared libraries: libbfd-2.28-system.so: cannot open shared

2017-07-31 Thread Trent Lloyd
The same problem occurs with the new 4.11.0-12 upload; I'm assuming that
since binutils is still in -proposed, linux-tools wasn't built against it?

 linux-tools-4.11.0-12 : Depends: binutils (< 2.29) but 2.29-2ubuntu1 is
to be installed
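The range in that Depends line can be sanity-checked without apt. A rough sketch below using only sort -V; version_ge is a hypothetical helper of mine, and dpkg --compare-versions remains the authoritative tool (its ordering differs from sort -V in some edge cases, e.g. versions containing "~"):

```shell
# Check whether a version falls inside a dpkg-style range such as
# (>= 2.28), (<< 2.29), approximated with GNU sort's version ordering.
version_ge() {  # true if $1 >= $2
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n 1)" = "$2" ]
}
installed="2.29-2ubuntu1"   # the binutils version quoted above
if version_ge "$installed" 2.28 && ! version_ge "$installed" 2.29; then
    echo "binutils $installed satisfies the range"
else
    echo "binutils $installed is outside the range"
fi
```

Since 2.29-2ubuntu1 sorts at or above 2.29, it fails the strict (<< 2.29) bound, which is exactly the conflict apt reports.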

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to binutils in Ubuntu.
https://bugs.launchpad.net/bugs/1702056




[Touch-packages] [Bug 1702056] Re: perf broken on 4.11.0-9-generic (artful): /usr/lib/linux-tools/4.11.0-9-generic/perf: error while loading shared libraries: libbfd-2.28-system.so: cannot open shared

2017-08-21 Thread Trent Lloyd
This is resolved now.

** Changed in: linux (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to binutils in Ubuntu.
https://bugs.launchpad.net/bugs/1702056

Title:
  perf broken on 4.11.0-9-generic (artful): /usr/lib/linux-
  tools/4.11.0-9-generic/perf: error while loading shared libraries:
  libbfd-2.28-system.so: cannot open shared object file: No such file or
  directory

Status in binutils package in Ubuntu:
  Invalid
Status in linux package in Ubuntu:
  Fix Released


To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/binutils/+bug/1702056/+subscriptions



[Touch-packages] [Bug 1661869] Re: maas install fails inside of a 16.04 lxd container due to avahi problems

2017-02-04 Thread Trent Lloyd
Avahi starts fine in a 16.04 container for me. Can you share what
errors you are actually seeing, Dustin?

lxc launch ubuntu:16.04 xenial
ssh ubuntu@
sudo apt install avahi-daemon
sudo systemctl status avahi-daemon


The post you linked is from January 2016 and concerns 15.10 (wily). Avahi does 
in fact not launch correctly on wily, but it works fine on xenial.

On wily, setting rlimit-nproc=4 seems to fix it; for some reason rlimit-
nproc=3 fails on wily even though the same setting works on xenial.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1661869

Title:
  maas install fails inside of a 16.04 lxd container due to avahi
  problems

Status in MAAS:
  New
Status in avahi package in Ubuntu:
  New
Status in lxd package in Ubuntu:
  Invalid

Bug description:
  The bug, and workaround, are clearly described in this mailing list
  thread:

  https://lists.linuxcontainers.org/pipermail/lxc-users/2016-January/010791.html

  I'm trying to install MAAS in a LXD container, but that's failing due
  to avahi package install problems.  I'm tagging all packages here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1661869/+subscriptions



[Touch-packages] [Bug 1661869] Re: maas install fails inside of a 16.04 lxd container due to avahi problems

2017-02-04 Thread Trent Lloyd
Oh right, I see now.. too early to comment as usual :(

The problem is that you are setting up a "privileged" container for MAAS
which does not use UID mapping, hence the issue shows up in the MAAS
workflow but not with a normal container deployment.


rlimit-nproc is simply set in /etc/avahi/avahi-daemon.conf, so it can easily 
be tweaked in the package. I believe the original idea behind it is to ensure 
that avahi cannot be used to execute something else, despite all the 
chrooting, etc., even if a way were found; essentially it blocks further 
forking. For that reason, it probably makes most sense to simply remove the 
limit rather than increase it by any given number.
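The proposed change amounts to deleting one line from the config. A minimal sketch, run here against a local stand-in file (avahi-daemon-nproc.conf is a placeholder; the real file is /etc/avahi/avahi-daemon.conf):

```shell
# Drop the rlimit-nproc line entirely rather than raise it; the other
# rlimit settings are left untouched. A sample stanza stands in for the
# real /etc/avahi/avahi-daemon.conf.
conf=avahi-daemon-nproc.conf
printf '[rlimits]\nrlimit-nproc=3\nrlimit-data=4194304\n' > "$conf"
sed -i.bak '/^rlimit-nproc=/d' "$conf"
cat "$conf"
# then: sudo systemctl restart avahi-daemon
```

With the line absent, avahi-daemon simply never calls setrlimit for RLIMIT_NPROC, so the container's UID-mapping quirk no longer matters.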

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1661869




[Touch-packages] [Bug 1661869] Re: maas install fails inside of a 16.04 lxd container due to avahi problems

2017-02-05 Thread Trent Lloyd
There was previously a patch to skip setting this (because it would fail); it
was removed for a couple of reasons, including an upstream change not to
abort if setting RLIMIT_NPROC failed:

I've committed a change upstream to simply remove the default setting of this 
option, and will prepare a debdiff to patch this change into Xenial:
https://github.com/lathiat/avahi/commit/537371c786479f44882ece3d905a0e5ccda4f0a2

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1661869




[Touch-packages] [Bug 1661869] Re: maas install fails inside of a 16.04 lxd container due to avahi problems

2017-02-07 Thread Trent Lloyd
** Changed in: avahi (Ubuntu)
   Status: New => Confirmed

** Changed in: avahi (Ubuntu)
   Importance: Undecided => High

** Changed in: avahi (Ubuntu)
 Assignee: (unassigned) => Trent Lloyd (lathiat)

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to avahi in Ubuntu.
https://bugs.launchpad.net/bugs/1661869

Title:
  maas install fails inside of a 16.04 lxd container due to avahi
  problems

Status in MAAS:
  Invalid
Status in avahi package in Ubuntu:
  Confirmed
Status in lxd package in Ubuntu:
  Invalid


To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1661869/+subscriptions


