Re: setting sysctl net.ipv4.ping_group_range

2023-01-14 Thread Ángel
On 2023-01-02 at 13:55 -0800, Noah Meyerhans wrote:
> I'm entirely happy to reassign this request to systemd and have the
> setting applied more broadly.  The question that arises then is what
> to
> do about the file-level capabilities on the ping binary.  Ideally we
> drop them entirely (including the setuid fallback), but when?
> 
> I could leave things completely decoupled, and simply wait until
> systemd
> makes the change and then upload iputils and assume that anybody
> upgrading iputils is also upgrading systemd.  That seems to be what
> Fedora did, according to the fedoraproject.org wiki cited above.
> Alternatives would seem to involve some level of versioned
> dependency,
> which doesn't feel right.
> 
> noah


Currently iputils-ping's postinst does (at configure):


> if command -v setcap > /dev/null; then
>     if setcap cap_net_raw+ep /bin/ping; then
>         chmod u-s /bin/ping
>     else
>         echo "Setcap failed on /bin/ping, falling back to setuid" >&2
>         chmod u+s /bin/ping
>     fi
> else
>     echo "Setcap is not installed, falling back to setuid" >&2
>     chmod u+s /bin/ping
> fi


I would change that to:


if sysctl -n net.ipv4.ping_group_range | grep -q -v '^1	0$'; then  # N.B. the whitespace in the pattern is a tab
    # No need for elevated rights for ping when the ping_group_range feature is in use
    setcap '' /bin/ping 2> /dev/null || true
    chmod u-s /bin/ping
elif command -v setcap > /dev/null; then
    # If setcap is installed, try setting cap_net_raw+ep,
    # which allows us to install our binaries without the setuid
    # bit.
    if setcap cap_net_raw+ep /bin/ping; then
        chmod u-s /bin/ping
    else
        echo "Setcap failed on /bin/ping, falling back to setuid" >&2
        chmod u+s /bin/ping
    fi
else
    echo "No ping_group_range and setcap is not installed, falling back to setuid" >&2
    chmod u+s /bin/ping
fi



If ping_group_range is set ("1	0", with a tab, is the value when
disabled, so anything else means that the feature is configured for
some groups), it disables both capabilities and setuid.
Otherwise, it follows the existing route.
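
For illustration, this is roughly how the feature could be enabled
system-wide (the drop-in file name is hypothetical; 0 2147483647 opens
the range to all groups):

  # enable unprivileged ICMP echo sockets for every group
  echo 'net.ipv4.ping_group_range = 0 2147483647' > /etc/sysctl.d/50-ping-group-range.conf
  sysctl --system    # apply the new setting without rebooting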

Note that it checks the value *in the running kernel*, so if both
iputils-ping and procps (or whatever package drops the sysctl) are
upgraded at the same time, the change would not be detected until the
*next* iputils upgrade after the setting takes effect.

One might additionally check whether a sysctl file enabling the
feature is present (the problem would still exist if both packages are
upgraded at the same time with iputils-ping being configured *before*
procps), but I think it's undesirable to leave a non-working program
after the update (for some reason, the procps postinst doesn't seem to
automatically apply newly dropped settings).

This code would also do the right thing™ if the running kernel lacked
support for the feature, although this seems moot, as it has been
there for more than a decade [1]. It might matter on kFreeBSD, on a
non-standard kernel where the feature was kept out (which wouldn't
make much sense), or if procps is not installed (despite being an
'important' package).

Regards


1- https://lwn.net/Articles/422330/




Re: Yearless copyrights: what do people think?

2023-03-03 Thread Ángel
On 2023-02-26 at 18:43 +0200, Adrian Bunk wrote:
> On Wed, Feb 22, 2023 at 07:39:09AM -0700, Sam Hartman wrote:
> > As Jonas mentions, including the years allows people to know when
> > works
> > enter the public domain and the license becomes more liberal.
> > I think our users are better served by knowing when the Debian
> > packaging
> > would enter the public domain.
> 
> If this is the intention, then including the years is pointless.
> 
> Article 7 of the Berne Convention says:
> (1) The term of protection granted by this Convention shall be the
> life 
> of the author and fifty years after his death.
> 
> (6) The countries of the Union may grant a term of protection in
> excess 
> of those provided by the preceding paragraphs.

This.

The copyright year, as a means of determining when the work enters the
public domain, can be useful for US works published before 1978, but
little more.

Nowadays most countries have settled on 70 years post mortem auctoris
(and while there are countries with shorter terms, since the US does
not follow the rule of the shorter term, you would probably still need
to wait those 70 years if doing any business there).

It could be useful when the author is unknown or there is corporate
authorship, in which case the US copyright term is 95 years from first
*publication* (which _may_ be different from the copyright year) or 120
years after creation.


Another can of worms is that the copyright year is often not well
maintained. There may be program changes with no bump of the copyright
year, and you also find projects that update the number yearly,
regardless of whether there are actual changes (so the stated year
doesn't convey the real information).




Re: A 2025 NewYear present: make dpkg --force-unsafe-io the default?

2025-01-13 Thread Ángel
(it seems the forwarding broke the thread 😕)

On 2025-01-13 at 11:10 +0100, Julien Plissonneau Duquène wrote:
> > normal dist-upgrade: 1m6.561s
> > eatmydata: 0m1.911s
> > force-unsafe-io: 0m9.096s
> 
> Thanks for these interesting figures. Could you please also provide 
> details about the underlying filesystem and storage stack, and the 
> effective mount options (cat /proc/fs/.../options)?
> 
> Cheers,

Sure.

This server has two (mechanical) disks, which are joined in a software
RAID 1, on top of which lies LUKS, which has an ext4 filesystem,
mounted with defaults,usrquota (i.e. rw,relatime,quota,usrquota,
data=ordered).

Then, docker is using aufs for the containers, which adds yet another
layer.

I'm afraid that if any of those is slowing things more than "normal",
it might be difficult to identify it.


reading /proc/fs/ext4/*/options:
> rw
> delalloc
> barrier
> user_xattr
> acl
> quota
> usrquota
> resuid=0
> resgid=0
> errors=continue
> commit=5
> min_batch_time=0
> max_batch_time=15000
> stripe=0
> data=ordered
> inode_readahead_blks=32
> init_itable=10
> max_dir_size_kb=0


Cheers




Re: A 2025 NewYear present: make dpkg --force-unsafe-io the default?

2025-01-12 Thread Ángel
Resending without the attachments, since the mailing list seems to have
fully eaten the message, rather than just delaying it until a moderator
approval, as I originally thought.

-------- Forwarded Message --------
From: Ángel
To: debian-devel
Subject: Re: A 2025 NewYear present: make dpkg --force-unsafe-io the 
default?
Date: Sat, 04 Jan 2025 15:26:33 +0100

I have been using eatmydata apt-get for many years.

In fact, it often pays off to do an initial apt-get install -y 
eatmydata so that you can run the actual command as eatmydata apt-get.

This is especially noticeable when running pipelines or other similar
processes, where you install almost everything many times. On a
relatively up-to-date system, not so much (but see below).
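
As a minimal sketch (assuming a throwaway environment where losing the
filesystem on a crash is acceptable), the pattern is simply:

  apt-get update
  apt-get install -y eatmydata
  eatmydata apt-get -y dist-upgrade    # fsync() and friends become no-ops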


Of course, this makes sense because I'm working on a VM or a
container, where I know the system will not crash. If it actually did,
the machine would be rebuilt rather than needing the interrupted
system to be consistent.
On the other hand, if the machine were a pet on physical hardware, I
would probably keep the syncs.


I seem to remember a big speedup adding eatmydata on a process that was
creating multiple images, from what used to be *hours* to something
_reasonable_ (whatever it was).


In order to do some benchmarks, I got a bullseye docker image that
happened to have a few old packages (e2fsprogs libcom-err2 libext2fs2
libsepol1 libss2 libssl1.1 logsave perl-base tzdata).


normal dist-upgrade: 1m6.561s

eatmydata: 0m1.911s

force-unsafe-io: 0m9.096s



I am attaching the full logs as benchmark-1.

The packages had to be downloaded, but they were all fetched from an
apt proxy that had already processed this run, so the network factor
is basically nonexistent.


I then tried to stress it a bit more and install apache2 with all the
suggests and recommends, which pulls quite a number of packages:

> 0 upgraded, 3835 newly installed, 0 to remove and 0 not upgraded.
> Need to get 6367 MB of archives.
> After this operation, 19.4 GB of additional disk space will be used.
> 

This actually required multiple attempts for priming the cache, since
deb.debian.org seemed to be throttling me with 502 errors.


This longer install took:

normal: 245m57.148s = 4h 5m 57s

eatmydata: 36m56.748s

force-unsafe-io: 83m40.860s


Logs attached as benchmark-2.


Admittedly, this longer install does lots of other things, from mandb
builds to the creation of ssh keys, with very diverse postinsts, which
eatmydata would be affecting as well.
Still, those additional steps are the same across the three runs (for
example, in benchmark-2 package fetching rises to about 4½ minutes,
but the difference between configurations is negligible¹), and I think
apt/dpkg would still be the main fsync() user, so this seems a
realistic scenario of what an end user experiences.





¹ $ grep ^Fetch benchmark-2.txt
Fetched 6367 MB in 4min 31s (23.5 MB/s)
Fetched 6367 MB in 4min 29s (23.7 MB/s)
Fetched 6367 MB in 4min 41s (22.6 MB/s)


Versions used were:
ii  apt   2.2.4    amd64  commandline package manager
ii  dpkg  1.20.13  amd64  Debian package management system



Happy New Year everyone





Re: Project-wide LLM budget for helping people (WAS: Re: Complete and unified documentation for new maintainers

2025-01-12 Thread Ángel
On 2025-01-12 at 18:03 +, Andrew M.A. Cater wrote:
> Watching other people find and fix bugs, even in code they
> have written or know well, I can't trust systems built on modern
> Markov chains to do better, no matter how much input you give them, and
> that's without crediting LLMs as able to create good novel code..

This is something I have thought before, and which I find lacking in
most (all?) instances of the "let's program with an LLM" topic.

When a human¹ programs something, I expect there to be a logical
process through which they arrive at the decision to write a given set
of lines of code. This doesn't mean those lines will be the right
ones, or bug-free. Just that they make sense.

For example, a program that does chdir("/"); at the beginning may
suggest it is meant to run as a daemon, as this avoids blocking
filesystems from being unmounted.
If it has a number of calls to getuid(), setuid(), setresuid()... it
might switch to a different user.

However, if the code was generated by an LLM, all bets are off, since
the lines may make no sense at all for this specific program.


It wouldn't be that strange if an LLM asked to generate a control file
for a perl module suggested a line such as
  Depends: libc6 (>= 2.34)
just because there are lots of packages with that dependency.²

A person could make a similar mistake, including unnecessary
dependencies when copying their work from an unrelated package without
properly cleaning it up. But how do you fix those things if the mentor
is an LLM?



¹ somewhat competent as a programmer
² hopefully, an LLM wouldn't be trained on the *output* of the
templates, though.




Re: A 2025 NewYear present: make dpkg --force-unsafe-io the default?

2025-01-02 Thread Ángel
On 2025-01-02 at 17:11 +0300, Michael Tokarev wrote:
> 02.01.2025 03:00, Aurélien COUDERC wrote:
> 
> > Sure but I wouldn’t know how to do that since I’m calling apt and
> > force-unsafe-io seems to be a dpkg option ?
> 
> echo force-unsafe-io > /etc/dpkg/dpkg.conf.d/unsafeio
> 
> before upgrade.
> 
> /mjt

Beware: this should actually be

 echo force-unsafe-io > /etc/dpkg/dpkg.cfg.d/unsafeio

:)




Re: Directory structure suggestion for configuration in /etc

2025-01-04 Thread Ángel
On 2024-12-22 at 08:37 +0100, Marc Haber wrote:
> Maybe our conffile handling should be modified to automatically
> accept comment-only changes to dpkg-conffiles.
> 
> Greetings
> Marc

That would require tagging what is considered a comment for each
conffile.

While most config files seem to use a # marker, others use ; (e.g.
php.ini), // or /* (firefox-esr), or even " (vimrc).

Some conffiles support multiple types of comments, while others
support none at all. In some cases comments may be added at the end of
lines; others require that the comment starts the line, albeit
whitespace is *sometimes* allowed before it.
In some cases conf files support a full shell syntax (you could
comment with if false; then ... fi), others only accept a tiny subset.
In some cases you can provide a multiline literal containing a
comment marker (and there could be reasons for that, such as setting a
banner made of # lines).

courier uses # comments, but lines beginning with ## are special: it
uses them so that its conffiles can be automatically upgraded (a
clever solution to the configuration-options-change-between-versions
problem, but again, specific to this package).


I do think it would be good for dpkg to be able to do some comment
merging (even if it is just telling you "the conffile changes are
irrelevant"), but it would require at least defining some classes of
etc files for which that would be supported (a compilation that would
be useful to other consumers as well, such as editors).


Regards




Re: Directory structure suggestion for configuration in /etc

2025-01-04 Thread Ángel
On 2024-12-20 at 11:42 -0800, Russ Allbery wrote:
> Maybe it would be more productive to take the preference disagreement as
> given and then try to figure out how to proceed given that we're never
> going to all agree on the best way of handling configuration files? Is
> there some way that we can try to accomodate both groups?

Create a diversion (*) from /etc to /usr/share/etc-conffiles-as-packaged.

Mkdir your almost-empty etc folder at /root/etc-changes.

Use a unionfs or union mount to combine
/usr/share/etc-conffiles-as-packaged + /root/etc-changes, mounted on
/etc (see the sketch below).
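
A minimal sketch with overlayfs (using the hypothetical paths above;
note that overlay also needs a workdir on the same filesystem as the
upper layer):

  mkdir -p /root/etc-changes /root/etc-workdir
  mount -t overlay overlay \
      -o lowerdir=/usr/share/etc-conffiles-as-packaged,upperdir=/root/etc-changes,workdir=/root/etc-workdir \
      /etc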


As the cherry on top of the cake, there could be a package that did
this automatically.


The trickiest part is probably that, for early daemons to correctly
use that /etc, you may need to set this up early in the boot process,
in the initrd, which is less comfortable to code. Alternatively,
perhaps a stub config in /etc would be enough for bootstrapping this
(systemd might interfere with it, though).



(*) I'm not sure the dpkg-divert mechanism would be able to handle a
diversion of /etc, but adding such support if missing would be simple
and *consistent* with the rest of the system.




Re: Bug#1091394: nproc: add new option to reduce emitted processors by system memory

2025-01-16 Thread Ángel
On 2025-01-16 at 10:18 +0100, Helmut Grohne wrote:
> Hi Julien,
> 
> On Mon, Jan 13, 2025 at 07:00:01PM +0100, Julien Plissonneau Duquène
> wrote:
> > Let's start with this then. I implemented a PoC prototype [1] as a
> > shell
> > script that is currently fairly linux-specific and doesn't account
> > for
> > cgroup limits (yet?). Feedback is welcome (everything is open for
> > discussion
> > there, including the name) and if there is enough interest I may
> > end up
> > packaging it or submitting it to an existing collection (I am
> > thinking about
> > devscripts).
> 
> I'm sorry for not having come back earlier and thus caused duplicaton
> of
> work. I had started a Python-based implementation last year and then
> dropped the ball over other stuff. It also implements the --require-
> mem
> flag in the way you suggested. It parses DEB_BUILD_OPTIONS,
> RPM_BUILD_NCPUS and CMAKE_BUILD_PARALLEL_LEVEL and also considers
> cgroup
> memory limits. I hope this captures all of the feedback I got during
> discussions and research.
> 
> I'm attaching my proof of concept. Would you join forces and turn
> either
> of these PoCs into a proper Debian package that could be used during
> package builds? Once accepted, we may send patches to individual
> Debian
> packages making use of it and call OOM FTBFS a packaging bug
> eventually.
> 
> Helmut

The script looks good, and is easy to read. It wouldn't be hard to
translate it to another language if needed to drop the Python
dependency (though that would increase the line count).

I find this behavior a bit surprising:

$ python3 guess_concurrency.py --min 10 --max 2
10

If there is a minimum limit, it is returned even when that violates
the maximum. It makes some sense to pick something, but I was actually
expecting an error for the above.

The order in which the CPU-count sources are processed is a bit
awkward as well.

The order it uses is CMAKE_BUILD_PARALLEL_LEVEL, DEB_BUILD_OPTIONS,
RPM_BUILD_NCPUS, --detect, nproc/os.cpu_count().

But the order in the code is 4, 5, 3, 2, 1.
Not straightforward.
Also, it performs actions such as running the external program nproc
even if the result is going to be discarded later (nproc is in an
essential package, I know, but still).

Also, why would the user want to manually choose between nproc and
os.cpu_count()?

I would unconditionally call nproc, with a fallback to os.cpu_count()
if that fails (I'm assuming nproc may be smarter than os.cpu_count();
otherwise one could use cpu_count() always).
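
In shell terms, the fallback I have in mind is just this sketch:

  nproc 2>/dev/null || python3 -c 'import os; print(os.cpu_count())'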

I suggest doing:

def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--count",
        action="store",
        default=None,
        metavar="NUMBER",
        help="supply a processor count",
    )

    (...)

    args = parser.parse_args()
    guess = None
    try:
        if args.count:
            guess = positive_integer(args.count)
    except ValueError:
        parser.error("invalid argument to --count")
    guess = guess or guess_from_environment("CMAKE_BUILD_PARALLEL_LEVEL")
    guess = guess or guess_deb_build_parallel()
    guess = guess or guess_from_environment("RPM_BUILD_NCPUS")
    if not guess:
        try:
            guess = guess_nproc()
        except Exception:
            pass  # fall back to the Python detection below
        guess = guess or guess_python()



Additionally, the --ignore argument of nproc(1) might be of use for
this script as well.
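
For example (--ignore is an existing nproc flag; holding back two
processors is just an illustration):

  nproc --ignore=2    # available processors, minus two kept in reserve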


Best regards

Ángel






Re: What is going on with atomics? (was: How to conditionally patch a package based on architecture with debhelper?)

2025-01-16 Thread Ángel
On 2025-01-16 at 20:16 +0100, Johannes Schauer Marin Rodrigues wrote:
> Hi Simon,
> 
> Quoting Simon Richter (2025-01-16 16:52:19)
> > atomic operations require linking against libatomic — always have..
> > Some
> > architectures inline a few functions, which is how you get away
> > with omitting
> > the library on amd64 most of the time, but this is incorrect.
> > 
> > No architecture specific patch should be required here, adding
> > libatomic everywhere is fine, possibly via
> > -Wl,--push-options,--as-needed,-latomic,--pop-options
> > 
> > (although as-needed is likely default anyway)
> 
> I find this very interesting. I am fighting -latomic linking issues
> for quite a while now and what you just said makes me think that I am
> in severe lack of understanding here. Can you elaborate?

Hi Josch,

It's not that complex.¹ If you use atomics, you should also be linking
with libatomic (i.e. -latomic). This is similar to using -pthread if
you are using multiple threads, or -lresolv if you use
gethostbyname().

What makes things complicated here is that on most architectures gcc
is able to inline everything, and libatomic is not really needed, so
you don't need to add -latomic and it just works.

...until it doesn't, such as on armel.

Just like you can use gethostbyname() directly, since it is provided
by libc, which is implicitly linked by default. But you would need to
specify -lresolv on Solaris, or it won't link.

So the solution would be simply to add -latomic.

Saying instead
-Wl,--push-options,--as-needed,-latomic,--pop-options

is an advanced way to make the linker not depend on libatomic when it
isn't actually needed.
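
As a hypothetical command line (demo.cpp standing in for any
translation unit that uses std::atomic), the difference boils down to:

  g++ -o demo demo.cpp                            # may fail on armel: undefined __atomic_* references
  g++ -o demo demo.cpp -Wl,--as-needed -latomic   # links everywhere; libatomic only recorded if used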



> 
> 
> So I'm at a loss at deciding who is at fault and who should be fixing
> something. You say this should work out-of-the-box on amd64 mostly but
> everything I have come across handling std::atomic types builds fine on all
> architectures -- except armel. So which is the thing that gets things wrong
> here?
> 
>  - armel?
>  - g++?
>  - cmake?
>  - oneTBB?
>  - vcmi?
> 
> Who should be patched? Relevant Debian bugs are #1089991 and #1088922
> 
> I'd very much welcome to be enlightened by you or anybody else on
> this list.. :)

I think the bug is in oneTBB; that is what should be patched.
Your last patch at
https://github.com/uxlfoundation/oneTBB/issues/1454#issuecomment-2267455977
seems appropriate.


vcmi should not need to care if oneTBB uses atomics or not.


armel is not strictly at fault (although you might blame the
architecture for not providing a way as easy as amd64 does).
g++ (actually, gcc has the same issue) can be partially blamed.
gcc created an interface which is not too usable; it was then reused
for g++, albeit the C++ language specification doesn't state that
requirement. It *should* be improved at the gcc level, but at the
present time the spec is that you need to link against libatomic, and
that's what oneTBB should be doing.


Best regards


¹ Ok, after writing this whole email, maybe a bit 😅





Re: [PHP-DEV] Suhosin patch disabled by default in Debian php5 builds

2012-02-02 Thread Ángel González
Stefan Esser wrote:
> And there are many many good reasons, why Suhosin must be external to PHP.
> The most obvious one is that the code is clearly separated, so that not 
> someone of the hundred PHP commiters accidently breaks a safe guard.
That's not a justification to keep it as a patch.
Safe guards could perfectly well be skipped by a commit which changed
nearby code, restructured a function, or created a different code
path, *even if the patch still applies*.
So you would still need to check for all kinds of unexpected changes
anyway.

If it were in core, at least anyone changing the related code would
realise that it's there, and could take it into account so as not to
break it. If it's maintained by someone else as a patch, that simply
won't happen.





Bug#688979: ITP: python-doublex -- doublex is a test doubles framework for the Python platform

2012-09-27 Thread Miguel Ángel García
Package: wnpp
Severity: wishlist
Owner: "Miguel Ángel García" 

* Package name: python-doublex
  Version : 0.6.1
  Upstream Author : David Villa Alises 
* URL : https://bitbucket.org/DavidVilla/python-doublex
* License : GPL
  Programming Lang: Python
  Description : doublex is a test doubles framework for the Python platform

 doublex is a test doubles framework for the Python platform. Test doubles
 frameworks are also called mocking or isolation frameworks. doublex can be
 used as a testing tool or as a Test Driven Development tool.
 .
 It generates stubs, spies, and mock objects using a minimal interface. It
 supports hamcrest matchers for both stub definitions and spy checking. All
 assertions are done using hamcrest assert_that(). Moreover, it’s been designed
 to make your tests less fragile when possible.

