Re: syslog-ng: identified for time_t transition but no ABI in shlibs

2024-04-05 Thread Attila Szalay
Hello Steve,

I do understand your concern about the time_t structure change, and I
also admit that there is some room for improvement in how the syslog-ng
package manages the versioned library dependency, but this is not the
solution.

Based on https://wiki.debian.org/NonMaintainerUpload, the binNMU should
be careful with the upload not to make the package uninstallable ("You
have to make sure that your binary-only NMU doesn't render the package
uninstallable. This could happen when a source package generates arch-
dependent and arch-independent packages that have inter-dependencies
generated using dpkg's substitution variable $(Source-Version).").

There are also explicit requirements on package maintainers to prevent
this, by doing exactly the opposite of what the patch suggests
(https://wiki.debian.org/binNMU, declaring a dependency from an arch:all
to an arch:any package as: Depends: foo (>= ${source:Version}), foo
(<< ${source:Version}.1~)).
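
For illustration, a minimal debian/control sketch of that recommendation
(the package names are hypothetical):

  Package: foo
  Architecture: any

  Package: foo-data
  Architecture: all
  Depends: foo (>= ${source:Version}), foo (<< ${source:Version}.1~)

This keeps the arch:all package in lockstep with the source version
while still leaving room for binNMUs of the arch:any package.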

Please forgive my ignorance if this has already been discussed
somewhere, but it does not seem wise to go against the current official
policy just because of one change.



Re: Upstream dist tarball transparency (was Re: Validating tarballs against git repositories)

2024-04-05 Thread Guillem Jover
Hi!

On Wed, 2024-04-03 at 23:53:56 +0100, James Addison wrote:
> On Wed, 3 Apr 2024 19:36:33 +0200, Guillem wrote:
> > On Fri, 2024-03-29 at 23:29:01 -0700, Russ Allbery wrote:
> > > On 2024-03-29 22:41, Guillem Jover wrote:
> > > I think with my upstream hat on I'd rather ship a clear manifest (checked
> > > into Git) that tells distributions which files in the distribution tarball
> > > are build artifacts, and guarantee that if you delete all of those files,
> > > the remaining tree should be byte-for-byte identical with the
> > > corresponding signed Git tag.  (In other words, Guillem's suggestion.)
> > > Then I can continue to ship only one release artifact.
> >
> > I've been pondering about this and I think I might have come up with a
> > protocol that to me (!) seems safe, even against a malicious upstream. And
> > does not require two tarballs which as you say seems cumbersome, and makes
> > it harder to explain to users. But I'd like to run this through the list
> > in case I've missed something obvious.
> 
> Does this cater for situations where part of the preparation of a source
> tarball involves populating a directory with a list of filenames that
> correspond to hostnames known to the source preparer?
> 
> If that set of hostnames changes, then regardless of the same source
> VCS checkout being used, the resulting distribution source tarball could
> differ.

> Yes, it's a hypothetical example; but given time and attacker patience,
> someone is motivated to attempt any workaround.  In practice the
> difference could be a directory of hostnames or it could be a bitflag
> that is part of a macro that is only evaluated under various nested
> conditions.

I'm not sure whether I've perhaps misunderstood your scenario, but if
the distributed tarball contains things not present in the VCS, then
with this proposal those can easily be removed, which means it does not
matter much if they differ between generations of the same tarball
(I mean it matters in the sense that it's an alarm sign, but it does
not matter in the sense that you can still get to the same state as
with a clean VCS checkout).

The other part then is whether the remaining contents differ from what
is in the VCS.

If any of these trigger a difference, then that would require manual
review. That of course does not exempt one from reviewing the VCS; it
just potentially removes one avenue for smuggling artifacts.
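
To make the idea concrete, a rough sketch of such a check (the manifest
name, tag, and URL are hypothetical):

  $ git clone -b v1.2.3 https://example.org/upstream.git vcs
  $ mkdir tarball
  $ tar -xf upstream-1.2.3.tar.gz -C tarball --strip-components=1
  # remove the files the manifest declares to be generated artifacts
  $ xargs -a tarball/dist-artifacts.list -I{} rm -f tarball/{}
  # whatever remains must match the (signed) VCS tag byte-for-byte
  $ diff -r --exclude=.git vcs tarball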

> To take a leaf from the Reproducible Builds[1] project: to achieve a
> one-to-one mapping between a set of inputs and an output, you need to
> record all of the inputs; not only the source code, but also the build
> environment.
> 
> I'm not yet convinced that source-as-was-written to distributed-source-tarball
> is a problem that is any different to that of distributed-source-tarball to
> built-package.  Changes to tooling do, in reality, affect the output of
> build processes -- and that's usually good, because it allows for
> performance optimizations.  But it also necessitates the inclusion of the
> toolchain and environment to produce repeatable results.

In this case, the property you'd gain is that you do not need to trust
the system of the person preparing the distribution tarball, and can
then regenerate those outputs from the (supposedly) good inputs in the
distribution tarball, using _your_ (or the distribution's) system
toolchain.

The distinction I see from the reproducible builds effort is that in
this case we can just discard some of the inputs and outputs and go
from the original sources.

(Not sure whether that clarifies things or whether I've talked past you now. :)

Thanks,
Guillem



Bug#1068440: ITP: emacs-corfu-terminal -- Corfu popup on terminal

2024-04-05 Thread Xiyue Deng
Package: wnpp
Severity: wishlist
Owner: Xiyue Deng 

* Package name: emacs-corfu-terminal
  Version : 0.7
  Upstream Author : Akib Azmain Turja 
* URL or Web page : https://codeberg.org/akib/emacs-corfu-terminal
* License : GPL-3
  Programming lang: Emacs Lisp
  Description : Corfu popup on terminal

 Corfu uses child frames to display candidates, which makes Corfu
 unusable in a terminal. This package replaces the child frames with
 popup/popon, which works everywhere.

I intend to maintain this package in the Debian Emacsen Team.



Bug#1068441: ITP: emacs-popon -- Pop floating text on an Emacs window

2024-04-05 Thread Xiyue Deng
Package: wnpp
Severity: wishlist
Owner: Xiyue Deng 

* Package name: emacs-popon
  Version : 0.13
  Upstream Author : Akib Azmain Turja 
* URL or Web page : https://codeberg.org/akib/emacs-popon
* License : GPL-3
  Programming lang: Emacs Lisp
  Description : Pop floating text on an Emacs window

 Popon allows you to pop up text on a window; such a piece of text is
 called a popon. Popons are window-local and sticky: they don't move
 while scrolling, and they don't even go away when switching buffers,
 but you can bind a popon to a specific buffer so that it only shows in
 that buffer.

This package is a dependency of emacs-corfu-terminal[1].  I intend to
maintain this package in the Debian Emacsen Team.

[1] https://bugs.debian.org/1068440



Re: becoming a debian member under a not-real name [now on debian-devel]

2024-04-05 Thread Bastian Germann

On 05.04.24 at 12:31, John Paul Adrian Glaubitz wrote:

Hello,

On Tue, 2024-04-02 at 12:40 +0200, Pierre-Elliott Bécue wrote:

If anything, this whole situation is a plea to finally end single-person
maintainership of packages.

It is my opinion that all packages should be either in collab maint or
in teams, and that any team member should feel free to have oversight
and make uploads on their own.


Before that can happen, we would have to ensure that all package sources
are moved to Salsa; otherwise it would be difficult for other DDs to make
changes to the source.


That is not actually needed. What I see as a blocker is having automatic
source scan/import (and repack) rules for every package, to ensure that
importing a new upstream version is done in the expected way. Many packages
do not have a d/watch file at all, or repack files manually.
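
(For reference, a minimal d/watch sketch, with a hypothetical upstream
URL:

  version=4
  https://example.org/foo/releases/ foo-([\d.]+)\.tar\.gz

and repacking can then be declared via Files-Excluded in d/copyright
instead of being done by hand.)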



Re: syslog-ng: identified for time_t transition but no ABI in shlibs

2024-04-05 Thread Bernd Zeimetz
Hi Attila,

On Fri, 2024-04-05 at 09:47 +0100, Attila Szalay wrote:
> Based on https://wiki.debian.org/NonMaintainerUpload, the binNMU
> should
> be careful

I think you are confusing binNMUs and NMUs.
See https://wiki.debian.org/binNMU

They are handled more or less automatically as soon as a rebuild is
needed for a transition.

You might want to read the bug report again; it basically says that no
NMU will be uploaded, but your package will break if you don't apply the
attached patch. And the binNMU that will most likely break it will
happen.

The way the time_t change happens was discussed for a *long* time;
you are a bit late with complaints.


Bernd


-- 
 Bernd Zeimetz                            Debian GNU/Linux Developer
 http://bzed.de                           http://www.debian.org
 GPG Fingerprint: ECA1 E3F2 8E11 2432 D485  DD95 EB36 171A 6FF9 435F



New editor integrations for Debian packaging files (LSP support)

2024-04-05 Thread Niels Thykier

Hi

Sent to d-devel and d-mentors, since the members of both lists are the 
target audience. If you reply, please consider whether both lists should 
see the message (and prune the recipient list accordingly).



In response to a recent thread on debian-devel about editor support for 
Debian packaging files, I have spent some time providing a better 
experience when working with Debian packaging files. Concretely, I have 
added a Language Server (per the LSP spec) to `debputy` that supports 
files like "debian/control" and "debian/copyright" (machine-readable 
variant only).



With this language server and an LSP capable editor, you will get:


 * Online diagnostics (in-editor linting results).
   - Instant gratification for common or simple errors. No need to wait
     for a package build to run `lintian`. Also, if `debputy` knows a
     solution, it will provide quick-fixes for the issue.


 * Online documentation ("hover docs") for relevant fields and values.
   - You do not have to remember what things mean, nor context switch
     to look up documentation! It is served directly in your editor
     when you request it via your editor's "hover doc" feature.


 * Context-based completion suggestions for known values or fields.
   - As an example, the completion will not suggest a field that is
     already in the current stanza of debian/control (since that would
     lead to an error).


 * Automatic pruning of trailing whitespace!
   - If your editor supports the "on save" feature, the language server
     will trim away trailing whitespace in the Debian packaging files.
     (It is a small thing, but it is still one paper cut less!)


The diagnostics can also be used without the language server, which is 
useful for CI or when you do not have an LSP capable editor. It works 
_without_ building the package first (unlike lintian) and it has an 
--auto-fix option (also unlike lintian).
  On the other hand, the diagnostics/linter is not a replacement for 
lintian. The `debputy` LSP/linter is solely aimed at providing editor 
support for Debian packaging files, which is a much narrower scope than 
lintian.



For those interested, there are some screenshots from emacs with this 
language server at: https://people.debian.org/~nthykier/lsp-screenshots/


As mentioned, any LSP capable editor should work, including:
 * neovim
 * vim (with `vim-youcompleteme`)
 * atom/pulsar
 * VS Code (unsurprisingly, since Microsoft created the LSP spec)
 * Eclipse (via the `lsp4e` plugin)
 * ... and many other editors


# Getting started

To use these new features, you will need:

# Preferably, 0.1.26 (unstable)
# Though, dh-debputy/0.1.24 (testing) will do
$ apt install dh-debputy python3-lsprotocol python3-pygls

# If you want online spellchecking, then add:
$ apt install python3-hunspell hunspell-en-us

# Check if debputy has config glue suggestions for your editor
# - note: the output may contain additional editor specific
#   dependencies.
$ debputy lsp editor-config
$ debputy lsp editor-config EDITOR_NAME

# Once your editor is configured correctly, it should start the
# language server on its own when you open a relevant file.

# Using the diagnostics without the language server.
#  - Check --help for additional features.
$ debputy lint

Additionally, for the editor features, you will need an LSP capable 
editor and the relevant configuration glue for said editor. The 
`debputy lsp editor-config` command will list known editors, and `debputy 
lsp editor-config EDITOR` will provide example editor glue. These 
examples are maintained on a "best-effort" basis.


At the moment, `debputy` knows about `emacs` with `eglot` (built into 
emacs/29) and `vim` with `vim-youcompleteme`. For other editors, you 
will have to figure out the glue config yourself - though feel free to 
submit an MR against the `debputy` repo, so others can benefit from it.


If you end up having to do your own editor config glue, you can start 
the language server via `debputy lsp server` (check `--help` if you need 
TCP or WS integration).




## Supported files

For the full list, please see `debputy lsp features`. The `diagnostics 
(lint)` lines also explain where `debputy lint` has support.


Version 0.1.26 of dh-debputy adds support for `debian/tests/control`. 
For `emacs`, the next version of dpkg-dev-el (RFS'ed in #1068427) is 
also needed for `debian/tests/control` support.



# Future work

 * There will be bugs. Lots of bugs.
   - Issues and MRs are welcome at
 https://salsa.debian.org/debian/debputy/-/issues
   - BTS bugs (with or without patches) also work. 🙂

 * There are a lot more diagnostics that could be triggered.  Feature
   requests welcome (see above).

 * Most of the hover documentation could probably use a review.

 * Most of the editor glue provided via `debputy lsp editor-config`
   should probably end up in packages somehow.

 * All this requires De

Re: Validating tarballs against git repositories

2024-04-05 Thread Simon McVittie
On Sat, 30 Mar 2024 at 14:16:21 +0100, Guillem Jover wrote:
> in my mind this incident reinforces my view that precisely storing
> more upstream stuff in git is the opposite of what we'd want, and
> makes reviewing even harder, given that in our context we are on a
> permanent fork against upstream, and if you include merge commits and
> similar, there's lots of places to hide stuff. In contrast storing
> only the packaging bits (debian/ dir alone) like pretty much every
> other downstream is doing with their packaging bits, makes for an
> obviously more manageable thing to review and not get drowned in,
> more so if we have to consider that next time perhaps the long-game
> gets played within Debian.

I'd like to push back against this, because I'm not convinced by this
reasoning, and I'd like to provide another point of view to consider.

I find that having the upstream source code in git (in the same form that
we use for the .orig.tar.*, so including Autotools noise, etc. if present,
but excluding any files that we exclude by repacking) is an extremely
useful tool, because it lets me trace the history of all of the files
that we are treating as source - whether hand-written or autogenerated -
if I want to do that. If we are concerned about defending against actively
malicious upstreams like the recent xz releases, then that's already a
difficult task and one where it's probably unrealistic to expect a high
success rate, but I think we are certainly not going to be able to achieve
it if we reject tools like git that could make it easier.

Am I correct to say that you are assuming here that we have a way to
verify the upstream source code out-of-band (therefore catching the xz
backdoor is out-of-scope here), and what you are aiming to detect here
is malicious changes that exist inside the Debian delta, more precisely
the dpkg-source 1.0 .diff.gz or 3.0 (quilt) .debian.tar.*? If that's your
threat model, then I don't think any of the modes that dgit can cope with
are actually noticeably more difficult than a debian/-only git repo.

As my example of a project that applies patches, I'm going to use
bubblewrap, which is a small project and has a long-standing patch that
changes an error message in bubblewrap.c to point to Debian-specific
documentation; this makes it convenient to tell at a glance whether
bubblewrap.c is the upstream version or the Debian version.

There are basically three dgit-compatible workflows, with some minor
adjustments around handling of .gitignore files:

- "patches applied" (git-debrebase, etc.):
  This is the workflow that proponents of dgit sometimes recommend,
  and dgit uses it as its canonicalized internal representation of
  the package.
  The git tree is the same as `dpkg-source -x`, with upstream source code
  included, debian/ also included, and any Debian delta to the upstream
  source pre-applied to those source files.
  In the case of bubblewrap, if we used this workflow, after you clone
  the project, bubblewrap.c would already have the Debian-specific error
  message.
  (dgit --split-view=never or dgit --quilt=dpm)

- "patches unapplied" (gbp pq, quilt, etc.):
  This is the workflow that many of the big teams use (at least Perl,
  Python, GNOME and systemd), and is the one that bubblewrap really uses.
  The git tree is the same as `dpkg-source -x --skip-patches`, with
  upstream source code included, and debian/ also included.
  Any Debian delta to the upstream source is represented in debian/patches
  but is *not* pre-applied to the source files: for example, in the case
  of bubblewrap, after you clone
  https://salsa.debian.org/debian/bubblewrap.git and view bubblewrap.c,
  it still has the upstream error message, not the Debian-specific one.
  (dgit --quilt=gbp or dgit --quilt=unapplied; I use the latter)

- debian/ only:
  This is what you're advocating above.
  The git tree contains only debian/. If there is Debian delta to the
  upstream source, it is in debian/patches/ as usual.
  (dgit --quilt=baredebian* family)

In the "patches applied" workflow, the Debian delta is something like
`git diff upstream/VERSION..debian/latest`, where upstream/VERSION must
match the .orig.tar.* and debian/latest is the packaging you are reviewing.
Not every tree is a valid one, because if you are using 3.0 (quilt),
then there is redundancy between the upstream source code and what's in
debian/patches: it is an error if the result of reverting all the patches
does not match the upstream source in the .orig.tar.*, modulo possibly
some accommodation for changes to **/.gitignore being accepted and ignored.
To detect malicious Debian changes in 3.0 (quilt) format, you would want
to either check for that error, or review both the direct diff and the
patches.
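
(To make that concrete, a rough sketch of such a check, with a
hypothetical package name and version:

  $ dpkg-source -x foo_1.0-1.dsc patched   # unpacks with patches applied
  $ (cd patched && QUILT_PATCHES=debian/patches quilt pop -a)
  $ mkdir pristine
  $ tar -xf foo_1.0.orig.tar.xz -C pristine --strip-components=1
  $ diff -r --exclude=debian --exclude=.pc patched pristine

modulo the .gitignore accommodation mentioned above.)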

Checking for that error is something that can be (and is) automated:
I don't use this workflow myself, but as far as I'm aware, dgit will
check that invariant, and it will fail to build your source package
if the invariant doesn't hold.

Re: Validating tarballs against git repositories

2024-04-05 Thread Colin Watson
On Fri, Apr 05, 2024 at 03:19:23PM +0100, Simon McVittie wrote:
> I find that having the upstream source code in git (in the same form that
> we use for the .orig.tar.*, so including Autotools noise, etc. if present,
> but excluding any files that we exclude by repacking) is an extremely
> useful tool, because it lets me trace the history of all of the files
> that we are treating as source - whether hand-written or autogenerated -
> if I want to do that. If we are concerned about defending against actively
> malicious upstreams like the recent xz releases, then that's already a
> difficult task and one where it's probably unrealistic to expect a high
> success rate, but I think we are certainly not going to be able to achieve
> it if we reject tools like git that could make it easier.

Strongly agree.  For many many things I rely heavily on having the
upstream source code available in the same working tree when doing any
kind of archaeology across Debian package versions, which is something I
do a lot.

I would hate to see an attacker who relied on an overloaded maintainer
push us into significantly less convenient development setups, thereby
increasing the likelihood of overload.

> In the "debian/ only" workflow, the Debian delta is exactly the contents
> of debian/. There is no redundancy, so every tree is in some sense a
> valid one (although of course sometimes patches will fail to apply, or
> whatever).

I'd argue that this, and the similar error case in patches-unapplied, is
symmetric with the error case in the patches-applied workflow (although
it's true that there is redundancy in _commits_ in the latter case).

-- 
Colin Watson (he/him)  [cjwat...@debian.org]



Re: Validating tarballs against git repositories

2024-04-05 Thread Luca Boccassi
On Fri, 5 Apr 2024 at 16:18, Colin Watson  wrote:
>
> On Fri, Apr 05, 2024 at 03:19:23PM +0100, Simon McVittie wrote:
> > I find that having the upstream source code in git (in the same form that
> > we use for the .orig.tar.*, so including Autotools noise, etc. if present,
> > but excluding any files that we exclude by repacking) is an extremely
> > useful tool, because it lets me trace the history of all of the files
> > that we are treating as source - whether hand-written or autogenerated -
> > if I want to do that. If we are concerned about defending against actively
> > malicious upstreams like the recent xz releases, then that's already a
> > difficult task and one where it's probably unrealistic to expect a high
> > success rate, but I think we are certainly not going to be able to achieve
> > it if we reject tools like git that could make it easier.
>
> Strongly agree.  For many many things I rely heavily on having the
> upstream source code available in the same working tree when doing any
> kind of archaeology across Debian package versions, which is something I
> do a lot.
>
> I would hate to see an attacker who relied on an overloaded maintainer
> push us into significantly less convenient development setups, thereby
> increasing the likelihood of overload.

+1

The gbp workflow is great: easy to review and very productive.



Re: xz backdoor

2024-04-05 Thread Pierre-Elliott Bécue
Pierre-Elliott Bécue  wrote on 31/03/2024 at 14:31:37+0200:
> Wookey  wrote on 31/03/2024 at 04:34:00+0200:
>
>> On 2024-03-30 20:52 +0100, Ansgar 🙀 wrote:
>>> Yubikeys, Nitrokeys, GNUK, OpenPGP smartcards and similar devices.
>>> Possibly also TPM modules in computers.
>>> 
>>> These can usually be used for both OpenPGP and SSH keys.
>>
>> Slightly off-topic, but a couple of recent posts have given me the
>> same thought:
>>
>> Can someone point to good docs on this?  I've had a yubikey for 3/4 of
>> a year now but have not yet worked out how I put my GPG key in it. (or
>> if it should be another key, or a subkey, or whatever). So I'm not
>> actually using it yet.
>>
>> PEB also described what sounded like a very sensible way to manage
>> keys (using subkeys) in one of these threads but I don't know how to
>> do that myself.
>
> I have started (and never finished) a blog article on how I use my
> YubiKey and what config I put in it. I'll definitely try to get it out
> before the end of next week. I'll probably extend it to mention the
> creation of GPG subkeys etc.
>
> I would also be happy, if it helps my fellow DDs, to try writing an
> article about some basic crypto concepts regarding PGP, RSA et al. But
> not in the same piece, I guess.

Hello,

For those interested: I've published two articles:

 1. One on PGP subkeys https://pe.becue.phd/openpgp-subkeys
 2. One on the OpenPGP module of YubiKeys:
https://pe.becue.phd/yubikey-workfow-openpgp

I'm happy to receive any kind of constructive feedback.

-- 
PEB




Issue with Linux File Picker Opening in Background

2024-04-05 Thread Lite
Dear Debian Team,

Whenever I try to select a file in any application, the file picker (or 
file manager) opens in the background, which is giving me a very hard 
time right now.

Could you please provide some guidance on how to resolve this issue? Any 
assistance or suggestions you could offer would be greatly appreciated.

Thank you for your attention to this matter.

Best regards

Re: xz backdoor

2024-04-05 Thread Daniel Leidert
On Friday, 2024-03-29 at 23:20 +0100, Moritz Mühlenhoff wrote:
> Russ Allbery  wrote:
> > I think this question can only be answered with reverse-engineering of the
> > backdoors, and I personally don't have the skills to do that.
> 
> In the pre-disclosure discussion permission was asked to share the payload
> with a company specialising in such reverse engineering. If that went
> through, I'd expect results to be publicly available in the next days.

If there is a final result, can we as a project share the results in a
prominent place? Or at least on d-devel-announce and/or d-security-
announce? I was also wondering about what could have been compromised,
what data might have been stolen, etc. And there are so many sources to
follow right now. So sharing the final results would be great.

Regards, Daniel




Re: xz backdoor

2024-04-05 Thread Sirius
In days of yore (Fri, 05 Apr 2024), Daniel Leidert thus quoth: 
> On Friday, 2024-03-29 at 23:20 +0100, Moritz Mühlenhoff wrote:
> > Russ Allbery  wrote:
> > > I think this question can only be answered with reverse-engineering of the
> > > backdoors, and I personally don't have the skills to do that.
> > 
> > In the pre-disclosure discussion permission was asked to share the payload
> > with a company specialising in such reverse engineering. If that went
> > through, I'd expect results to be publicly available in the next days.
> 
> If there is a final result, can we as a project share the results in a
> prominent place? Or at least on d-devel-announce and/or d-security-
> announce? I was also wondering about what could have been compromised,
> what data might have been stolen, etc. And there are so many sources to
> follow right now. So sharing the final results would be great.

If you have followed the discussion on the Openwall ML, there have been a
couple of posts that point at a general overview of what the code did,
an analysis of how the data was hidden in the 'corrupt' xz archive
under testing, and some analysis of the actual .o which suggested this
was not just a backdoor but almost a remote-code-execution portal.

It has been interesting reading for sure, and given the way they hid
it, this really does not look like your average script kiddie at work.
I have my own private suspicions about the potential culprits, but I
figure it is wiser to keep that under my hat, as it were.

By the looks of things, both here and elsewhere, this was caught just in
the nick of time, meaning it did not make it out into the wild (at least
true for Debian and Fedora), so nothing was compromised. The parallels
to Clifford Stoll and The Cuckoo's Egg are eerie, though. I second the
request for sharing "final results", but I recognise that it may be
weeks still before that can happen.

-- 
Kind regards,

/S



Re: xz backdoor

2024-04-05 Thread Paul R. Tagliamonte
There's also a very thorough exploration at https://github.com/amlweems/xzbot

Including, very interestingly, a discussion of the format(s) of the
payload(s), a mechanism to replace the backdoor key so you can play with
executing commands against a popped sshd, and some code to go along
with it.

  paultag

On Fri, Apr 5, 2024 at 2:19 PM Daniel Leidert  wrote:
>
> On Friday, 2024-03-29 at 23:20 +0100, Moritz Mühlenhoff wrote:
> > Russ Allbery  wrote:
> > > I think this question can only be answered with reverse-engineering of the
> > > backdoors, and I personally don't have the skills to do that.
> >
> > In the pre-disclosure discussion permission was asked to share the payload
> > with a company specialising in such reverse engineering. If that went
> > through, I'd expect results to be publicly available in the next days.
>
> If there is a final result, can we as a project share the results in a
> prominent place? Or at least on d-devel-announce and/or d-security-
> announce? I was also wondering about what could have been compromised,
> what data might have been stolen, etc. And there are so many sources to
> follow right now. So sharing the final results would be great.
>
> Regards, Daniel



-- 
:wq



Re: xz backdoor

2024-04-05 Thread Christoph Anton Mitterer
On Fri, 2024-04-05 at 20:47 +0200, Sirius wrote:
> > If there is a final result, can we as a project share the results in a
> > prominent place? Or at least on d-devel-announce and/or d-security-
> > announce? I was also wondering about what could have been compromised,
> > what data might have been stolen, etc. And there are so many sources to
> > follow right now. So sharing the final results would be great.
> 
> If you have followed the discussion on the Openwall ML, there have been a
> couple of posts that point at a general overview of what the code did,
> an analysis of how the data was hidden in the 'corrupt' xz archive
> under testing, and some analysis of the actual .o which suggested this
> was not just a backdoor but almost a remote-code-execution portal.

I've also tried to follow the various lists and the RE efforts on Discord.
My understanding is that this hasn't been completed yet, and while
people seem to *believe* that the backdoor didn't do anything other
than wait for commands sent to an sshd (which would mean everyone who
never had sshd running, or at least not publicly listening, is safe)
- that's not yet 100% sure, or is it?

And given how much effort these attackers spent on hiding the stuff, it
doesn't seem impossible that they hid even more.


I'd think that most servers are safe, simply because they typically run
stable.
But I guess many people run their personal computers on some
rolling/unstable release.


So I fully agree with Daniel Leidert that it would be really nice if
there were - eventually, once the reverse engineering has been finished -
some form of official confirmation of whether and when people who had
the compromised xz-utils installed may feel 100% safe or should consider
themselves possibly pwned.


Especially:
- whether any hidden calling home was found (so far not, but this may
  e.g. happen only under special conditions, like some matching host
  or user names), which would possibly compromise private keys, etc.
- whether any commands could have automatically been pulled from remote
- whether any attack vectors other than via sshd were found
- whether some other form of infestation (adding a new user, keys to
  authorized_keys, etc.) was possible

or whether all that can be ruled out for sure.

And whether that has been confirmed for both versions of the malware
that were distributed.

In short:
- Can people who had it, but had no sshd running and/or had it only
  running behind some firewall/NAT/etc., feel 100% safe that they have
  not been further compromised?

And while it wouldn't affect me personally, some have also asked
whether:
- they'd be safe if access to sshd was only restricted via
  hosts.allow/hosts.deny.


Last but not least, it would be nice if Debian had some trustworthy
experts who could actually confirm those findings.
No offence meant towards the people doing the reverse engineering, but
in principle anyone on the internet could claim anything and make
people wrongly feel safe.



Cheers,
Chris.



Re: Validating tarballs against git repositories

2024-04-05 Thread Marco d'Itri
On Apr 05, Simon McVittie  wrote:

> I find that having the upstream source code in git (in the same form that
> we use for the .orig.tar.*, so including Autotools noise, etc. if present,
> but excluding any files that we exclude by repacking) is an extremely
> useful tool, because it lets me trace the history of all of the files
> that we are treating as source - whether hand-written or autogenerated -
> if I want to do that. If we are concerned about defending against actively
I agree: it would be unthinkable for me to not have the complete history 
immediately available while I am working on a package.

-- 
ciao,
Marco

