security/py-fail2ban quits working after some hours

2022-10-10 Thread Michael Grimm
[cc's to maintainer]

Hi,

this is a recent stable/13-n252672-2bd3dbe3dd6 running py39-fail2ban-1.0.1_2 
and python39-3.9.14

I have been running fail2ban for years now, but immediately after upgrading 
py39-fail2ban from 0.11.2 to 1.0.1 the fail2ban-server ends up as a runaway 
process consuming all CPU time. This happens between 4 and 24 hours after the 
initial fail2ban-server startup.

I have recompiled world, kernel, and all ports, but to no avail. I am able to 
reproduce this behaviour on two different hosts running the same OS et al.


After becoming a runaway process:
 
root> /usr/local/etc/rc.d/fail2ban status
fail2ban is running as pid 26487.

root> ps Af | grep fail2ban
26487  -  S    545:40.61 /usr/local/bin/python3.9 
/usr/local/bin/fail2ban-server --async -b -s /var/run/fail2ban/fail2ban.sock -p 
/var/run/fail2ban/fail2ban.pid --loglevel INFO --logtarget SYSLOG 
--syslogsocket auto

root> /usr/local/etc/rc.d/fail2ban stop
^C
2022-10-08 09:29:45,451 fail2ban[1447]: WARNING Caught 
signal 2. Exiting

root> kill -9 26487

root> /usr/local/etc/rc.d/fail2ban start
2022-10-08 09:30:30,776 fail2ban[1609]: ERROR   
Fail2ban seems to be in unexpected state (not running but the socket exists)

root> la /var/run/fail2ban/*
-rw-------  1 root  wheel  uarch 6 Oct  7 21:26 
/var/run/fail2ban/fail2ban.pid
srwx------  1 root  wheel  uarch 0 Oct  7 21:26 
/var/run/fail2ban/fail2ban.sock

root> rm /var/run/fail2ban/*

root> /usr/local/etc/rc.d/fail2ban start
Server ready


So, whenever the server becomes a runaway process, it can only be restarted by 
killing it hard and then removing both the pid and sock files.

Has anyone else run into this issue, or am I the only one so far? I couldn't 
find anything regarding this issue in the bug trackers or on Google.




BTW: One glitch in the fail2ban.conf file:

# Option: allowipv6
# Notes.: Allows IPv6 interface:
# Default: auto
# Values: [ auto yes (on, true, 1) no (off, false, 0) ] Default: auto
#allowipv6 = auto

This will result in a warning at start time:

2022-10-08 09:30:51,520 fail2ban.configreader   [1633]: WARNING 
'allowipv6' not defined in 'Definition'. Using default one: 'auto'

After uncommenting this entry as "allowipv6 = auto", those warnings disappear.
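
For reference, a minimal sketch of that workaround (the path below assumes the 
FreeBSD port's default config location):

# /usr/local/etc/fail2ban/fail2ban.conf
# setting the documented default explicitly silences the startup warning
allowipv6 = auto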

Regards,
Michael




Resigning from ports maintainership

2022-10-10 Thread Davíð Steinn Geirsson
Hello,

I must inform you that as I am no longer working at ISNIC, I am hereby
resigning as ports maintainer of all ports that I was previously handling.
These ports had maintainer set as da...@isnic.is, but as I no longer have
access to this email I am sending this from my personal email address.

Please CC me on all replies.

Thank you all,
Davíð



Re: Resigning from ports maintainership

2022-10-10 Thread Fernando Apesteguía
On Mon, Oct 10, 2022 at 5:45 PM Davíð Steinn Geirsson  wrote:
>
> Hello,
>
> I must inform you that as I am no longer working at ISNIC, I am hereby
> resigning as ports maintainer of all ports that I was previously handling.
> These ports had maintainer set as da...@isnic.is, but as I no longer have
> access to this email I am sending this from my personal email address.
>
> Please CC me on all replies.

Done.

Thank you for your contributions.

Cheers!

>
> Thank you all,
> Davíð
>



Many poudriere builds get stuck in lib-depends and run-depends phases and never succeed

2022-10-10 Thread Yuri
Every once in a while, when many builds happen to all be in the lib-depends 
and run-depends phases in poudriere, they get stuck and later all fail 
together.


All failures look like the log below. It looks like many 'pkg 
install llvmNN' processes for different builds compete with each other 
and don't progress.



Does anybody else have this problem? What could be a solution?



Thanks,

Yuri





---begin log---

===>   qcsxcad-0.6.2.9_2 depends on shared library: libQt5Gui.so - not found
===>   Installing existing package /packages/All/qt5-gui-5.15.5p165.pkg
[13amd64-local-workstation-job-03] Installing qt5-gui-5.15.5p165...
[13amd64-local-workstation-job-03] `-- Installing mesa-dri-21.3.8...
[13amd64-local-workstation-job-03] |   `-- Installing libXv-1.0.11_2,1...
[13amd64-local-workstation-job-03] |   `-- Extracting libXv-1.0.11_2,1: 
.. done

[13amd64-local-workstation-job-03] |   `-- Installing libXvMC-1.0.12...
[13amd64-local-workstation-job-03] |   `-- Extracting libXvMC-1.0.12: 
.. done
[13amd64-local-workstation-job-03] |   `-- Installing 
libunwind-20211201_1...
[13amd64-local-workstation-job-03] |   `-- Extracting 
libunwind-20211201_1: .. done

[13amd64-local-workstation-job-03] |   `-- Installing libxshmfence-1.3_1...
[13amd64-local-workstation-job-03] |   `-- Extracting 
libxshmfence-1.3_1: . done

[13amd64-local-workstation-job-03] |   `-- Installing llvm13-13.0.1_3...
[13amd64-local-workstation-job-03] |   | `-- Installing lua53-5.3.6...
[13amd64-local-workstation-job-03] |   | `-- Extracting lua53-5.3.6: 
.. done

[13amd64-local-workstation-job-03] |   `-- Extracting llvm13-13.0.1_3: .
Failed to install the following 1 package(s): 
/packages/All/qt5-gui-5.15.5p165.pkg

*** Error code 1

---end log---




Re: security/py-fail2ban quits working after some hours

2022-10-10 Thread Cy Schubert
In message <6ef1b25d-3121-4fa1-bf47-dce1ffd64...@ellael.org>, Michael Grimm 
writes:
> [cc's to maintainer]
>
> Hi,
>
> this is a recent stable/13-n252672-2bd3dbe3dd6 running
> py39-fail2ban-1.0.1_2 and python39-3.9.14
>
> I have been running fail2ban for years now, but immediately after
> upgrading py39-fail2ban from 0.11.2 to 1.0.1 the fail2ban-server will
> end up as a runaway process consuming all CPU time. This happens between
> 4 and 24 hours after initial fail2ban-server startup.
>
> I have recompiled world, kernel and all ports, but to no avail. I am
> able to reproduce this behaviour on two different hosts running the same
> OS et al.
>
>
> After becoming a runaway process:
>
>   root> /usr/local/etc/rc.d/fail2ban status
>   fail2ban is running as pid 26487.
>
>   root> ps Af | grep fail2ban
>   26487  -  S    545:40.61 /usr/local/bin/python3.9
> /usr/local/bin/fail2ban-server --async -b -s
> /var/run/fail2ban/fail2ban.sock -p /var/run/fail2ban/fail2ban.pid
> --loglevel INFO --logtarget SYSLOG --syslogsocket auto

The only difference from my setup is that --logtarget is a file.

>
>   root> /usr/local/etc/rc.d/fail2ban stop
>   ^C
>   2022-10-08 09:29:45,451 fail2ban[1447]: WARNING
> Caught signal 2. Exiting
>
>   root> kill -9 26487
>
>   root> /usr/local/etc/rc.d/fail2ban start
>   2022-10-08 09:30:30,776 fail2ban[1609]: ERROR
> Fail2ban seems to be in unexpected state (not running but the socket
> exists)
>
>   root> la /var/run/fail2ban/*
>   -rw-------  1 root  wheel  uarch 6 Oct  7 21:26
> /var/run/fail2ban/fail2ban.pid
>   srwx------  1 root  wheel  uarch 0 Oct  7 21:26
> /var/run/fail2ban/fail2ban.sock
>
>   root> rm /var/run/fail2ban/*
>
>   root> /usr/local/etc/rc.d/fail2ban start
>   Server ready
>
>
> So, whenever the server becomes a runaway process, it can only be restarted
> by killing it hard and then removing both pid and sock files.

This isn't enough information to diagnose the problem. See below.

>
> Has anyone else run into this issue, or am I the only one so far? I
> couldn't find anything regarding this issue in the bug trackers or on
> Google.

I've been running this version for over a week without issue.

>
>
>
>
> BTW: One glitch in fail2ban.conf file:
>
>   # Option: allowipv6
>   # Notes.: Allows IPv6 interface:
>   # Default: auto
>   # Values: [ auto yes (on, true, 1) no (off, false, 0) ] Default: auto
>   #allowipv6 = auto

This won't cause looping.

>
> This will result in a warning at start time:
>
>   2022-10-08 09:30:51,520 fail2ban.configreader   [1633]: WARNING
> 'allowipv6' not defined in 'Definition'. Using default one: 'auto'
>
> After uncommenting this entry as "allowipv6 = auto", those warnings
> disappear.

Can you answer a few questions, please?

1. What does uname -a say?

2. Was fail2ban built from ports or did you pkg upgrade?

3. What other ports/packages are installed?

4. Which filters are you using? Have you modified any? Or have you written 
your own?

5. Which actions are you using? Have you modified them? Or have you written 
your own?

6. When fail2ban loops, instead of simply killing it, run truss. You can do 
this by:

truss -faeD -o fail2ban.truss -p THE_TRUSS_PID
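
For example, a minimal sketch using the pid file shown earlier in this thread
(the output file name is arbitrary):

pid=$(cat /var/run/fail2ban/fail2ban.pid)
truss -faeD -o /tmp/fail2ban.truss -p "$pid"
# let it spin for a minute or two, then Ctrl-C to detach and inspect:
less /tmp/fail2ban.truss

The -D flag prints the time delta between syscalls, which helps show where the
loop is spinning.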


-- 
Cheers,
Cy Schubert 
FreeBSD UNIX: Web:  http://www.FreeBSD.org
NTP:   Web:  https://nwtime.org

e^(i*pi)+1=0





RE: Many poudriere builds get stuck in lib-depends and run-depends phases and never succeed

2022-10-10 Thread Mark Millard
Yuri  wrote on
Date: Mon, 10 Oct 2022 17:56:33 UTC :

> Every once in a while, when many builds happen to all be in the lib-depends 
> and run-depends phases in poudriere, they get stuck and later all fail 
> together.
> 
> All failures look like the log below. It looks like many 'pkg 
> install llvmNN' processes for different builds compete with each other 
> and don't progress.
> 
> 
> Does anybody else have this problem? What could be a solution?
> 
> 
> 
> Thanks,
> 
> Yuri
> 
> 
> 
> 
> 
> ---begin log---
> 
> ===>   qcsxcad-0.6.2.9_2 depends on shared library: libQt5Gui.so - not found
> ===>   Installing existing package /packages/All/qt5-gui-5.15.5p165.pkg
> [13amd64-local-workstation-job-03] Installing qt5-gui-5.15.5p165...
> [13amd64-local-workstation-job-03] `-- Installing mesa-dri-21.3.8...
> [13amd64-local-workstation-job-03] |   `-- Installing libXv-1.0.11_2,1...
> [13amd64-local-workstation-job-03] |   `-- Extracting libXv-1.0.11_2,1: 
> .. done
> [13amd64-local-workstation-job-03] |   `-- Installing libXvMC-1.0.12...
> [13amd64-local-workstation-job-03] |   `-- Extracting libXvMC-1.0.12: 
> .. done
> [13amd64-local-workstation-job-03] |   `-- Installing 
> libunwind-20211201_1...
> [13amd64-local-workstation-job-03] |   `-- Extracting 
> libunwind-20211201_1: .. done
> [13amd64-local-workstation-job-03] |   `-- Installing libxshmfence-1.3_1...
> [13amd64-local-workstation-job-03] |   `-- Extracting 
> libxshmfence-1.3_1: . done
> [13amd64-local-workstation-job-03] |   `-- Installing llvm13-13.0.1_3...
> [13amd64-local-workstation-job-03] |   | `-- Installing lua53-5.3.6...
> [13amd64-local-workstation-job-03] |   | `-- Extracting lua53-5.3.6: 
> .. done
> [13amd64-local-workstation-job-03] |   `-- Extracting llvm13-13.0.1_3: .
> Failed to install the following 1 package(s): 
> /packages/All/qt5-gui-5.15.5p165.pkg
> *** Error code 1
> 
> ---end log---

Any interesting console messages? /var/log/messages content?
dmesg -a content?

Can you use poudriere bulk -w reasonably in your context for
reproducing the problem? The captured /wrkdir/ trees might
hold clues, especially if you have cores set to be stored in
the local directory instead of in /tmp (avoiding places -w
does not record).


Note: I've similarly captured clang++ failure data by having
it store the files locally (.) instead of in /tmp . There is a
command line option for clang/clang++ for controlling where
the files are put. That can let bulk -w capture the files.
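
For example, a sketch of re-running just the failing build with work
directories kept (the jail, tree, and set names are taken from the log above;
the port origin cad/qcsxcad is an assumption, adjust to the real category):

poudriere bulk -w -j 13amd64 -p local -z workstation cad/qcsxcad
# -w saves the wrkdir of failed builds so the trees can be inspected afterwards

For the clang crash files, -fcrash-diagnostics-dir=<dir> is likely the option
meant above; pointing it inside the wrkdir keeps the files where bulk -w
records them.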

===
Mark Millard
marklmi at yahoo.com




Re: security/py-fail2ban quits working after some hours

2022-10-10 Thread Roger Marquis

Cy Schubert wrote:

Michael Grimm writes:

this is a recent stable/13-n252672-2bd3dbe3dd6 running
py39-fail2ban-1.0.1_2 and python39-3.9.14
I have been running fail2ban for years now, but immediately after
upgrading py39-fail2ban from 0.11.2 to 1.0.1 the fail2ban-server will
end up as a runaway process consuming all CPU time. This happens between
4 and 24 hours after initial fail2ban-server startup.


Am running fail2ban-1.0.1_2 and python38-3.8.14 and did have a similar
startup issue.  Could not use the 'service' command and had to resort
to 'kill -9' to stop it.  The fix for that was to delete
/var/{run,db}/fail2ban/* and restart.
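
In other words, something like this (a sketch; paths as described above, with
the pid taken from the pid file):

kill -9 $(cat /var/run/fail2ban/fail2ban.pid)  # 'service fail2ban stop' hangs
rm /var/run/fail2ban/* /var/db/fail2ban/*      # pid, socket, and state db
service fail2ban start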

Still seeing relatively high CPU utilization compared to the previous
version, though it rotates between cores quickly.

  PID USERNAME THR PRI NICE  SIZE  RES STATE C   TIME    WCPU COMMAND
67125 root      17  20    0   74M  12M uwait 8  23.7H 102.94% python3.8

Voluntary context switches (VCSW) seem high compared to other processes,
though I have no previous benchmark to compare against.

  PID USERNAME   VCSW IVCSW  READ WRITE FAULT TOTAL PERCENT COMMAND
67125 root     590723     0     0     0     0     0   0.00% python3.8

Only reading from 5 logfiles; kernel is 12.3-RELEASE-p7; fail2ban built
from ports; truss reporting mostly "ERR#60 'Operation timed out'"...

Roger Marquis



Re: security/py-fail2ban quits working after some hours

2022-10-10 Thread Cy Schubert
In message , Roger Marquis writes:
> Cy Schubert wrote:
> > Michael Grimm writes:
> >> this is a recent stable/13-n252672-2bd3dbe3dd6 running
> >> py39-fail2ban-1.0.1_2 and python39-3.9.14
> >> I have been running fail2ban for years now, but immediately after
> >> upgrading py39-fail2ban from 0.11.2 to 1.0.1 the fail2ban-server will
> >> end up as a runaway process consuming all CPU time. This happens between
> >> 4 and 24 hours after initial fail2ban-server startup.
>
> Am running fail2ban-1.0.1_2 and python38-3.8.14 and did have a similar
> startup issue.  Could not use the 'service' command and had to resort
> to 'kill -9' to stop it.  The fix for that was to delete
> /var/{run,db}/fail2ban/* and restart.
>
> Still seeing relatively high CPU utilization compared to the previous
> version, though it rotates between cores quickly.
>
>   PID USERNAME THR PRI NICE  SIZE  RES STATE C   TIME    WCPU COMMAND
> 67125 root      17  20    0   74M  12M uwait 8  23.7H 102.94% python3.8
>
> Voluntary context switches (VCSW) seem high compared to other processes,
> though I have no previous benchmark to compare against.
>
>   PID USERNAME   VCSW IVCSW  READ WRITE FAULT TOTAL PERCENT COMMAND
> 67125 root     590723     0     0     0     0     0   0.00% python3.8
>
> Only reading from 5 logfiles; kernel is 12.3-RELEASE-p7; fail2ban built
> from ports; truss reporting mostly "ERR#60 'Operation timed out'"...

Could you and Michael by chance be using a dovecot jail?

https://github.com/fail2ban/fail2ban/issues/3370
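
That is, whether jail.local (or a jail.d snippet) enables the dovecot filter,
e.g. something like this (illustrative; the logpath is an assumption):

[dovecot]
enabled = true
logpath = /var/log/maillog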


-- 
Cheers,
Cy Schubert 
FreeBSD UNIX: Web:  https://FreeBSD.org
NTP:   Web:  https://nwtime.org

e^(i*pi)+1=0





Re: Many poudriere builds get stuck in lib-depends and run-depends phases and never succeed

2022-10-10 Thread Tatsuki Makino
Hello.

If the 13amd64-local-workstation jail for that build has already been 
terminated, you can try reproducing the problem yourself with the following 
commands.

# Commands to be executed on the host
poudriere jail -s -j 13amd64 -p local -z workstation # -j,-p,-z are probably 
for 13amd64-local-workstation :)
jls -v # In the meantime, check what jail is running
jexec 13amd64-local-workstation-n env -i "TERM=$TERM" /usr/bin/login -f -p root
# From here, the commands in the shell within the jail
service newsyslog onestart
service syslogd onestart
make -C /usr/ports/x11-toolkits/qt5-gui pkg-depends install-package
# Check something in the output here
less /var/log/messages
less /var/log/something
exit
# When you exit the shell, the jail is still there, so terminate it.
poudriere jail -k -j 13amd64 -p local -z workstation

Regards.

Yuri wrote on 2022/10/11 02:56:
> ---begin log---
> 
> ===>   qcsxcad-0.6.2.9_2 depends on shared library: libQt5Gui.so - not found
> ===>   Installing existing package /packages/All/qt5-gui-5.15.5p165.pkg
> [13amd64-local-workstation-job-03] Installing qt5-gui-5.15.5p165...
> [13amd64-local-workstation-job-03] `-- Installing mesa-dri-21.3.8...
> [13amd64-local-workstation-job-03] |   `-- Installing libXv-1.0.11_2,1...
> [13amd64-local-workstation-job-03] |   `-- Extracting libXv-1.0.11_2,1: 
> .. done
> [13amd64-local-workstation-job-03] |   `-- Installing libXvMC-1.0.12...
> [13amd64-local-workstation-job-03] |   `-- Extracting libXvMC-1.0.12: 
> .. done
> [13amd64-local-workstation-job-03] |   `-- Installing libunwind-20211201_1...
> [13amd64-local-workstation-job-03] |   `-- Extracting libunwind-20211201_1: 
> .. done
> [13amd64-local-workstation-job-03] |   `-- Installing libxshmfence-1.3_1...
> [13amd64-local-workstation-job-03] |   `-- Extracting libxshmfence-1.3_1: 
> . done
> [13amd64-local-workstation-job-03] |   `-- Installing llvm13-13.0.1_3...
> [13amd64-local-workstation-job-03] |   | `-- Installing lua53-5.3.6...
> [13amd64-local-workstation-job-03] |   | `-- Extracting lua53-5.3.6: 
> .. done
> [13amd64-local-workstation-job-03] |   `-- Extracting llvm13-13.0.1_3: .
> Failed to install the following 1 package(s): 
> /packages/All/qt5-gui-5.15.5p165.pkg
> *** Error code 1
> 
> ---end log---




Re: security/py-fail2ban quits working after some hours

2022-10-10 Thread Cy Schubert
In message , Roger Marquis writes:
> Cy Schubert wrote:
> > Michael Grimm writes:
> >> this is a recent stable/13-n252672-2bd3dbe3dd6 running
> >> py39-fail2ban-1.0.1_2 and python39-3.9.14
> >> I have been running fail2ban for years now, but immediately after
> >> upgrading py39-fail2ban from 0.11.2 to 1.0.1 the fail2ban-server will
> >> end up as a runaway process consuming all CPU time. This happens between
> >> 4 and 24 hours after initial fail2ban-server startup.
>
> Am running fail2ban-1.0.1_2 and python38-3.8.14 and did have a similar
> startup issue.  Could not use the 'service' command and had to resort
> to 'kill -9' to stop it.  The fix for that was to delete
> /var/{run,db}/fail2ban/* and restart.
>
> Still seeing relatively high CPU utilization compared to the previous
> version, though it rotates between cores quickly.
>
>   PID USERNAME THR PRI NICE  SIZE  RES STATE C   TIME    WCPU COMMAND
> 67125 root      17  20    0   74M  12M uwait 8  23.7H 102.94% python3.8
>
> Voluntary context switches (VCSW) seem high compared to other processes,
> though I have no previous benchmark to compare against.
>
>   PID USERNAME   VCSW IVCSW  READ WRITE FAULT TOTAL PERCENT COMMAND
> 67125 root     590723     0     0     0     0     0   0.00% python3.8
>
> Only reading from 5 logfiles; kernel is 12.3-RELEASE-p7; fail2ban built
> from ports; truss reporting mostly "ERR#60 'Operation timed out'"...
>
> Roger Marquis
>

I've been able to reproduce the problem here. Please try the attached patch 
obtained from our upstream. It fixes a dovecot regression that crept into 
the latest release.



From 5238999eb7b9383215feaff59d75b21981497653 Mon Sep 17 00:00:00 2001
From: Cy Schubert 
Date: Mon, 10 Oct 2022 21:03:28 -0700
Subject: [PATCH] security/py-fail2ban: Import fix for upstream issue gh-3370

Fix dovecot jail causes 100% CPU usage (upstream GH issue 3370)

Reported by:	Michael Grimm 
		Roger Marquis 
Obtained from:	https://github.com/fail2ban/fail2ban/issues/3370
		Upstream commit ca2b94c5
MFH		2022Q4
---
 security/py-fail2ban/Makefile   |  2 +-
 security/py-fail2ban/files/patch-ISSUE-3370 | 87 +
 2 files changed, 88 insertions(+), 1 deletion(-)
 create mode 100644 security/py-fail2ban/files/patch-ISSUE-3370

diff --git a/security/py-fail2ban/Makefile b/security/py-fail2ban/Makefile
index dd076aeb1a05..789a7f54c903 100644
--- a/security/py-fail2ban/Makefile
+++ b/security/py-fail2ban/Makefile
@@ -1,6 +1,6 @@
 PORTNAME=	fail2ban
 DISTVERSION=	1.0.1
-PORTREVISION=	2
+PORTREVISION=	3
 CATEGORIES=	security python
 PKGNAMEPREFIX=	${PYTHON_PKGNAMEPREFIX}
 
diff --git a/security/py-fail2ban/files/patch-ISSUE-3370 b/security/py-fail2ban/files/patch-ISSUE-3370
new file mode 100644
index ..74e5a98cad01
--- /dev/null
+++ b/security/py-fail2ban/files/patch-ISSUE-3370
@@ -0,0 +1,87 @@
+From ca2b94c5229bd474f612b57b67d796252a4aab7a Mon Sep 17 00:00:00 2001
+From: sebres 
+Date: Tue, 4 Oct 2022 14:03:07 +0200
+Subject: [PATCH] fixes gh-3370: resolve extremely long search by repeated
+ apply of non-greedy RE `(?:: (?:[^\(]+|\w+\([^\)]*\))+)?` with following
+ branches (it may be extremely slow up to infinite search depending on
+ message); added new regression tests amend to gh-3210: fixes regression and
+ matches new format in aggressive mode too
+
+---
+ ChangeLog |  4 
+ config/filter.d/dovecot.conf  |  8 +---
+ fail2ban/tests/files/logs/dovecot | 22 ++
+ 3 files changed, 31 insertions(+), 3 deletions(-)
+
+diff --git config/filter.d/dovecot.conf config/filter.d/dovecot.conf
+index 0415ecb4..dc3ebbcd 100644
+--- config/filter.d/dovecot.conf
++++ config/filter.d/dovecot.conf
+@@ -7,19 +7,21 @@ before = common.conf
+ 
+ [Definition]
+ 
++_daemon = (?:dovecot(?:-auth)?|auth)
++
+ _auth_worker = (?:dovecot: )?auth(?:-worker)?
+ _auth_worker_info = (?:conn \w+:auth(?:-worker)? \([^\)]+\): auth(?:-worker)?<\d+>: )?
+-_daemon = (?:dovecot(?:-auth)?|auth)
++_bypass_reject_reason = (?:: (?:\w+\([^\):]*\) \w+|[^\(]+))*
+ 
+ prefregex = ^%(__prefix_line)s(?:%(_auth_worker)s(?:\([^\)]+\))?: )?(?:%(__pam_auth)s(?:\(dovecot:auth\))?: |(?:pop3|imap|managesieve|submission)-login: )?(?:Info: )?%(_auth_worker_info)s.+$
+ 
+ failregex = ^authentication failure; logname=\S* uid=\S* euid=\S* tty=dovecot ruser=\S* rhost=(?:\s+user=\S*)?\s*$
+-^(?:Aborted login|Disconnected|Remote closed connection|Client has quit the connection)(?:: (?:[^\(]+|\w+\([^\)]*\))+)? \((?:auth failed, \d+ attempts(?: in \d+ secs)?|tried to use (?:disabled|disallowed) \S+ auth|proxy dest auth failed)\):(?: user=<[^>]*>,)?(?: method=\S+,)? rip=(?:[^>]*(?:, session=<\S+>)?)\s*$
++^(?:Aborted login|Disconnected|Remote closed connection|Client has quit the connection)%(_bypass_reject_reason)s \((?:auth failed, \d+ attempts(?: in \d+ secs)?|tried to use (?:disabled|disallowed) \S+ auth|proxy dest auth failed)\):(?: user=<[^>]*>,)?(?: method=\S+,)? rip=(?:[^>]*(?:, session=<\S+>)

Re: Python version dependencies in pkg

2022-10-10 Thread Shane Ambler
On 10/10/22 2:41 pm, Tatsuki Makino wrote:
> Shane Ambler wrote on 2022/10/10 12:04:
>> On 8/10/22 3:06 pm, Tatsuki Makino wrote:
>>

> 
>> make -C /usr/ports/devel/py-six/ -V _PYTHON_VERSIONS -V 
>> _PYTHON_VERSION_MINIMUM -V _PYTHON_VERSION_MAXIMUM -V _VALID_PYTHON_VERSIONS 
>> -V FLAVORS
> 3.9 3.8 3.7 3.10 3.11 2.7
> 2.7
> 
> 3.9 2.7 3.8 3.7 3.10 3.11
> py39 py27
> 
>> If your FLAVORS list is shorter than that, maybe you have another
>> setting in make.conf causing that. Try removing your make.conf and see
>> if you get different values.

That matches the current ports tree. Now add
BUILD_ALL_PYTHON_FLAVORS=yes to /etc/make.conf and FLAVORS matches the
full list of valid python versions: py39 py27 py38 py37 py310 py311
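
That is (a sketch; the -V query is the same one quoted above):

# /etc/make.conf
BUILD_ALL_PYTHON_FLAVORS=yes

# then verify:
make -C /usr/ports/devel/py-six -V FLAVORS
# expected: py39 py27 py38 py37 py310 py311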


-- 
FreeBSD - the place to B...Software Developing

Shane Ambler